Index

2.22.2018

Work, Play, and the (Oculus) Rift that Divides


I currently have three ongoing projects:



Creating toolpaths in PixelCNC using its own logo as image input.

PixelCNC:

My most recent endeavor, PixelCNC, was started at the end of summer 2017. It has since been released in an alpha early-access state, with a few big items left on the todo list in order to get it where I really want it to be.

PixelCNC relies on image-based operations to generate CNC toolpaths from image input. A substantial speedup in toolpath generation could be had by moving the image processing code to the GPU. With nearly 20 years of experience with OpenGL, this isn't much of a hurdle to overcome, at least as far as planning it out and solving the problem itself goes. The most difficult part is committing the time and mental effort.
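
To give a sense of what "image-based operations" can mean here, below is a generic sketch of one common image-based CAM operation - computing a tool-offset surface by grayscale dilation with the tool's profile - and not necessarily what PixelCNC itself does. Every output pixel is an independent maximum over a neighborhood, which is exactly the kind of work that maps naturally onto a GPU fragment shader.

    #include <math.h>

    /* Hypothetical illustration: compute a tool-offset heightmap by grayscale
     * dilation with the tool's profile. Not PixelCNC's actual code.
     * heights[] and out[] are w*h arrays of surface heights; tool_profile(dx, dy)
     * returns how far the tool's cutting surface sits above its tip at a lateral
     * offset from center (e.g. for a ball-nose of radius r: r - sqrt(r*r - d*d)). */
    typedef float (*tool_profile_fn)(float dx, float dy);

    static void dilate_tool_offset(const float *heights, float *out,
                                   int w, int h, int tool_rad_px,
                                   float px_size, tool_profile_fn tool_profile)
    {
        for(int y = 0; y < h; y++)
        for(int x = 0; x < w; x++)
        {
            float best = -1e30f;
            for(int ty = -tool_rad_px; ty <= tool_rad_px; ty++)
            for(int tx = -tool_rad_px; tx <= tool_rad_px; tx++)
            {
                int sx = x + tx, sy = y + ty;
                if(sx < 0 || sy < 0 || sx >= w || sy >= h)
                    continue;
                /* lowest the tool tip can sit without gouging this sample */
                float z = heights[sy * w + sx] - tool_profile(tx * px_size, ty * px_size);
                if(z > best)
                    best = z;
            }
            out[y * w + x] = best;
        }
    }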

A larger and less clearly defined goal would shift PixelCNC toward image editing and manipulation: adding the ability to create and edit images as a whole new program mode would reduce or eliminate the need for a separate dedicated image editor just to produce something to feed into PixelCNC, further streamlining the workflow for artistic CNC work. That is, after all, the entire point of PixelCNC. I have a few ideas about what an image editing mode would comprise, including some things I haven't seen in any image editing program that lend themselves really well to sculpting depthmaps easily and intuitively - and it would build on the image processing functionality I've already written.

One more decently sized feature I'd like to add is an auto-update system, which would be introduced once PixelCNC enters beta and would save users from having to manually download and install new versions of PixelCNC as they're released.

There's a bunch of other little things on the todo list for PixelCNC that aren't exactly on the critical path and just require time and effort - less consequential, but still useful or handy. A few, to give you an idea:

  • Functionality to detect when a series of toolpath segments fits a circular arc within a given tolerance and replace them with G02/G03 circular arc motions (see the sketch after this list).
  • User-defined presets for CNC operations, so users can quickly create an operation they use frequently without having to edit each parameter and rebuild operations from scratch.
  • Defining rectangular/cylindrical stock shapes for confining generated toolpaths to. This is trickier than it sounds, simply because of how PixelCNC works.
  • Inlay generation mode, which would build on the existing medial-axis carving operation to allow the creation of a negative carving - whatever operations that would entail - to perfectly fit over an existing medial-axis carve operation.
  • Automatic G-code export by tool, which would build on the existing ability to toggle which operations are included in exported G-code, so that users could easily create CNC programs which will perform all operations concerning each tool individually.
  • Polygon Operation: similar to the spiral operation, except the spiral would be an N-sided polygon so that toolpaths could be concentric triangles, squares, hexagons, and so on.
  • Mesh export: allowing users to export the heightfield meshes that PixelCNC generates for visualization and certain CAM algorithms.
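
Regarding the arc-fitting item above: the general technique (a sketch of one common approach, not necessarily how PixelCNC will do it) is to fit a circle through the first, middle, and last points of a run of segments, then verify that every point in the run lies within the tolerance of that circle.

    #include <math.h>
    #include <stdbool.h>

    typedef struct { float x, y; } vec2;

    /* Circumcenter of three points; returns false if they are (nearly) collinear. */
    static bool circle_from_3pts(vec2 a, vec2 b, vec2 c, vec2 *center, float *radius)
    {
        float d = 2.0f * (a.x*(b.y - c.y) + b.x*(c.y - a.y) + c.x*(a.y - b.y));
        if(fabsf(d) < 1e-9f)
            return false;
        float aa = a.x*a.x + a.y*a.y, bb = b.x*b.x + b.y*b.y, cc = c.x*c.x + c.y*c.y;
        center->x = (aa*(b.y - c.y) + bb*(c.y - a.y) + cc*(a.y - b.y)) / d;
        center->y = (aa*(c.x - b.x) + bb*(a.x - c.x) + cc*(b.x - a.x)) / d;
        *radius = hypotf(a.x - center->x, a.y - center->y);
        return true;
    }

    /* True if points[0..count-1] all lie within 'tolerance' of a single circular
     * arc, in which case the run could be emitted as one G02/G03 move. */
    static bool fits_arc(const vec2 *points, int count, float tolerance,
                         vec2 *center, float *radius)
    {
        if(count < 3)
            return false;
        if(!circle_from_3pts(points[0], points[count / 2], points[count - 1],
                             center, radius))
            return false;
        for(int i = 0; i < count; i++)
        {
            float dev = fabsf(hypotf(points[i].x - center->x,
                                     points[i].y - center->y) - *radius);
            if(dev > tolerance)
                return false;
        }
        return true;
    }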

Another thing I'm planning and setting up for is recording a demonstration video for PixelCNC, showing the entire process: going from an image to a PixelCNC project, defining tools, setting up operations, using the simulation mode, exporting the G-code, loading it into a CNC controller, and actually cutting stuff. This could also be cut down into a short, concise promotional video that people could pass around to spread the word about PixelCNC. I'm still deciding exactly what I want to demonstrate, because running the CNC is always a commitment of energy, time, and raw material, so I want to be sure of what I'm doing before I go ahead with the requisite expenditures.





Holocraft:

A less recent but related project is Holocraft, a much more esoteric CNC-related pursuit: a program currently sitting in a less-than-user-friendly state of partial disrepair. I could fix it up a bit and start selling it as well, which was at one point the tentative plan. The real plan was to sell actual holograms, but the lack of access to a CNC capable of realizing that vision put an end to it for the time being. An old friend I recently got back in touch with mentioned that he's been setting up a hackerspace with machines that could make my original dream a reality.



Another idea with Holocraft is to incorporate it into PixelCNC instead, as an operation the user would generate a toolpath for on a loaded image. The catch is that Holocraft specifically relies on 3D geometry as input, from which it generates toolpaths for cutting reflective optics that recreate some representation of that geometry when viewed under a point light source.

Decisions, decisions..






Bitphoria:

My biggest project, at least the one I've invested the most time and energy into over recent years, is my game/engine Bitphoria. It's the culmination of a lifetime of learning all things gamedev, and of virtually every novel game idea I've ever had. I take pride in the fact that it's written from scratch and does things differently, but it's not quite "there" as far as visual polish and aesthetic are concerned. The actual 'game' aspect itself is largely incomplete, but the engine is basically ready to be made into a wide array of games. However, due to recent developments I've begun tinkering with Bitphoria again, in between incremental PixelCNC updates, with a newfound vision for what it's meant to be.


The Oculus Go being announced at OC4 by Zuckerberg hisself.

By 'recent developments' I am referring to the fact that I finally caught the VR bug back in October, when the Oculus Go was announced during their OC4 event down in San Jose (it was San Jose, right?). Something just clicked, and I decided that I would acquire a Go as soon as humanly possible and make 2018 the year I dive into VR development after wrapping up PixelCNC. I am convinced that the lower price point, and the removal of the need to own a high-end smartphone, PC, or game console, will prove fruitful for the VR industry as a whole. More people will suddenly find low-end VR affordable, which will result in many more people being exposed to at least some form of quality VR - and not some shoddy makeshift excuse for VR like Google Cardboard (at least when used with phones that have poor sensor quality or clogged-up Android systems that imbue apps with unbearable motion-to-photon latencies). Only then will they know the reason VR is here to stay!


"First Contact", the Oculus Rift demo I tried at Best Buy.

After a few months of watching PC VR headsets drop in price I decided that I didn't want to limit myself to 3DOF (three degrees of freedom) tracking or the little directional remote controller, and started looking at different headsets. Eventually I found myself at Best Buy and demoed "First Contact" on the Rift. What a far cry from the DK2 I had tried at a friend's house a few years prior! The Touch controllers made a world of difference - being able to actually interact with a virtual scene a universe away with my own hands was unlike anything I had ever imagined.


You forget you're holding controllers, and feel like you're grabbing things.

While my wife was ordering a new PSU on Newegg I told her to go ahead and order a Rift as well. I've been a proud owner of the Rift for a month now and have already been integrating the Oculus PC SDK into Bitphoria. There was a lot of work to be done: because the vast majority of a game in Bitphoria is described in external text files, I needed to introduce some means of connecting entities to the controllers and responding to different input states from the buttons and thumbsticks.


This is what scripting simple VR player flying controls to Bitphoria looks like.

There's still a bit of work to go before it's fully integrated, at which point I can release Bitphoria as a means for people to quickly and easily script all manner of multiplayer games without any real gamedev or modding experience. Ultimately I'd like to expand on the existing game bitfile system and circumvent the scripting system altogether by crafting a WYSIWYG game editor that lets users craft games directly in VR. Bitphoria would then be the first VR-based game making system! No more screwing around with Unity or Unreal - just fire up Bitphoria and start building games.


Even if you already know everything, you have to learn how *they* do it!

However, Bitphoria's codebase is already showing its age. The things I wish I had done differently are piling up and making it difficult to work in. So the plan for now is to focus on making a single cool game out of Bitphoria while promoting the scripting side of things to get people interested in making their own VR games with it. Ultimately, though, I plan to rebuild the engine from scratch - borrowing a lot of code and structure from the existing engine, but re-implementing everything more cleanly and with in-engine game editing in mind.

There are several components that would require specially designed and implemented WYSIWYG interfaces for crafting Bitphoria games:

  1. Entity Types - Describes each possible entity type, serving as a sort of template (see the rough sketch after this list). Dictates the various aspects of a game entity, such as what physics behaviors it has, what effects flags it has set, its collision volume and size, what entity functions various logic states can trigger, any ambient/looping audio, particle emissions, appearance, etc.
  2. Entity Functions - These are executable lists of instructions that produce specific entity behaviors as a result of different internal and external logic states or triggers that the engine detects through physics, player interaction, scripted timers and conditional statements becoming satisfied, and the like.
  3. Model Procedures - Lists of modeling operations which produce geometry for entity appearances by plotting out points, lines, triangles, and signed-distance function primitives for modeling voxel-based geometries with constructive solid geometry conventions. These can be made to vary in a number of ways with each generated instance of a procedure, allowing entities to not appear exactly identical to others of the same type.
  4. Dynamic Meshes - 3D point clouds of 'nodes' attached together using springs. Springs can be assigned procedural models to give the 'dynamesh' visual form and the appearance of multiple conjoined moving parts. Dynameshes allow entities to appear to have more dynamic physics interactions with the environment and other entities as well as allow for simple/crude animations. Otherwise entities would be restricted to appearing only as rigid static geometry.
  5. Entity HUD Overlays - Procedural models assigned to entity types to display various stats and visual indicators conveying the state of the entity. Entity state and properties can drive modifiers to animate color, orientation, size, and position of the models drawn to allow for a variety of interesting HUD elements (aka 'widgets') to be drawn over an entity in a player's perspective.
  6. World Prefabs - A more recent idea that I'm still toying with: world prefabs would consist of simple voxel models which the world-generation algorithm randomly places around the map per various modifier flags and statistical 'tendencies' specified for each prefab, providing some semblance of structure and design to worlds beyond what little is offered by the existing random plateaus/caves/pits. These could also have entity spawns placed in them so they can serve an actual function during gameplay.
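
To make the entity-type item above a bit more concrete, here's a rough sketch of the kind of data such a template might carry. The struct layout and field names are hypothetical, not Bitphoria's actual definitions.

    #include <stdint.h>

    /* Hypothetical sketch of an entity-type template, loosely following the
     * description above; not Bitphoria's actual data structures. */
    typedef struct entity_type_s
    {
        char        name[32];

        uint32_t    physics_flags;      /* e.g. gravity, bounce, float, fly   */
        uint32_t    effect_flags;       /* engine-recognized special effects  */
        float       collision_radius;   /* size of the collision volume       */

        int         func_on_spawn;      /* indices of entity functions fired  */
        int         func_on_touch;      /*   by various logic states/triggers */
        int         func_on_timer;
        float       timer_interval;

        int         model_procedure;    /* procedural model for appearance    */
        int         dynamesh;           /* optional node/spring dynamesh      */
        int         hud_overlay;        /* optional HUD overlay model         */

        int         ambient_sound;      /* looping audio, -1 for none         */
        int         particle_emitter;   /* ambient particle emission, -1 none */
    } entity_type_t;
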
Designing and implementing intuitive interfaces for users to define Bitphoria games with - with immediate feedback for quick iteration and turnaround - is a task unto itself. VR is a young medium that we're still becoming familiar with, still exploring the language of, so there's the added challenge of discovering what even works and what doesn't. I imagine VR would allow for much more intuitive interfaces than 2D does, especially when it comes to crafting 3D content. We're still figuring it all out - all developers are, collectively. It's a bit of a wild west.

Regardless, I believe it will be a powerful thing to give the average person the ability to easily create their own VR gaming experiences, with the right tools to let them quickly produce quality interactive games. Bitphoria's scripting system is designed to reduce each possible dimension of a game to just a few simple options, but the number of dimensions, and the freedom to connect all sorts of pieces together, is what allows for such a vast universe of permutations.

Ultimately, the goal has always been to create a system somewhat akin to SnapMap for DOOM, which coincidentally follows in the same vein as my original vision for Bitphoria - a platform for people to easily create and share their own mods/games. SnapMap's idea of a WYSIWYG editor trumps my weary plan for users to work in (relatively) tedious script files. Admittedly I was somewhat awe-struck and simultaneously irked when id Software unveiled DOOM at E3 2015, after I had already been working on Bitphoria for a year. "They stole my idea!" I'm just glad SnapMap was the last project on DOOM that Carmack worked on before his departure, as it was born of the community modding spirit he was always such a proponent of.


Editing some logic gates in DOOM's SnapMap.


Anyway, that's pretty much that. There's also a bunch of things I need to revisit, such as the post-processing effects system and the windmapping stuff, as these really sopped up any remaining CPU and GPU headroom that was previously left on the table. When running in VR, however, they seem to push the envelope a bit too far to be viable - at least on my bare-minimum VR-spec system. I'll have to buckle down and really push to keep certain things in there. Screenspace reflections? Likely out of the question now, but maybe I can hack something together that provides a similar effect. It was always more of an aesthetic thing than a way to give Bitphoria more visual realism. Particles and wind fluid dynamics could perhaps be moved to the GPU, but we'll see. They might just be effects reserved for the highest of system specs.

I should be able to re-enable at least some minimal post-processing effects. My FXAA implementation was pretty solid, and would surely be faster than supersampling, possibly even faster than multisampling. I'll just have to see for myself. The postfx system was also responsible for final color adjustment, and featured a really cool spectral tracer effect which let particles and entities leave overbright residual trails across the screen. It was subtle, but it really accentuated the whole aesthetic and feel, making certain objects - lasers and explosions, for instance - seem like they were glowing blindingly bright. The windmapping really lent itself to the overall feel as well. Maybe I can figure out some kind of layered screenspace/frustumspace fluid dynamics solver that would project onto the scene when particles and entities query the windmap. It was always purely a visual effect that was never intended to affect gameplay-relevant entities, and it gave a whole new dimension to the feel of Bitphoria. I miss my wind.


Conclusion:

Now I have a customer base with PixelCNC - customers who invested in it as early-access software with the promise of new features coming down the pipe. I owe it to them, for their support, to stay focused and productive on PixelCNC. As far as I'm concerned, supporters who have made a financial investment in something should take precedence over anything else I have going on.





1.24.2018

PixelCNC: Images to CNC G-Code


Sometime early last year I'd finally won my wife over with the idea of making and selling various CNC milled/routed items on our Etsy store - where we've been selling crafts and prints for years. She's developed her own process for creating designs which I would then run through my process to produce a final product on my tabletop CNC.



However, this process was somewhat cumbersome and tedious, involving meshing the image in Blender (which could take a while when applying a decimate operation to get the polycount down to something workable) and then fiddling around in a conventional CAM program to actually generate toolpaths. The whole process was very tweaking-intensive, requiring constant refinement and adjustment, and consumed more time than I felt should be necessary. Isn't there a way to just get from an image to G-code?

To improve the process I (apparently) wrote a program, 'TGA2STL' (https://github.com/DEF7/TGA2STL), in early 2016, before I had convinced my lady of the profitable nature of the CNC - which was mostly sitting idle while I worked on my game engine Bitphoria. I had completely forgotten about TGA2STL until I stumbled across it in my projects folder a few months ago. I was surprised both at my thoroughness in developing it and at my total forgetfulness of the fact that I had created it at all. At any rate, it became part of my new process for converting my wife's designs into finished products. But I was still left at the mercy of whatever toolpath generation software I had at my disposal, however tedious or uninspired it might be. I knew I could do better!
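
For the curious, a heightmap-to-STL conversion like this boils down to treating each pixel as a height sample and emitting two triangles per grid cell. The sketch below illustrates the general idea with a minimal ASCII STL writer; it is not the actual TGA2STL code.

    #include <stdio.h>

    /* Minimal sketch: write a heightfield as an ASCII STL surface, two triangles
     * per grid cell. Not the actual TGA2STL code; normals are left as zero,
     * which most CAM/mesh tools will recompute from the winding order. */
    static void write_heightfield_stl(const char *path, const float *heights,
                                      int w, int h, float xy_scale, float z_scale)
    {
        FILE *f = fopen(path, "w");
        if(!f)
            return;
        fprintf(f, "solid heightfield\n");
        for(int y = 0; y < h - 1; y++)
        for(int x = 0; x < w - 1; x++)
        {
            float x0 = x * xy_scale,       y0 = y * xy_scale;
            float x1 = (x + 1) * xy_scale, y1 = (y + 1) * xy_scale;
            float z00 = heights[ y      * w + x    ] * z_scale;
            float z10 = heights[ y      * w + x + 1] * z_scale;
            float z01 = heights[(y + 1) * w + x    ] * z_scale;
            float z11 = heights[(y + 1) * w + x + 1] * z_scale;
            float tri[2][3][3] = {
                { {x0,y0,z00}, {x1,y0,z10}, {x1,y1,z11} },
                { {x0,y0,z00}, {x1,y1,z11}, {x0,y1,z01} },
            };
            for(int t = 0; t < 2; t++)
            {
                fprintf(f, "facet normal 0 0 0\n outer loop\n");
                for(int v = 0; v < 3; v++)
                    fprintf(f, "  vertex %f %f %f\n",
                            tri[t][v][0], tri[t][v][1], tri[t][v][2]);
                fprintf(f, " endloop\nendfacet\n");
            }
        }
        fprintf(f, "endsolid heightfield\n");
        fclose(f);
    }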

A project my father had always encouraged me to work on was a CAD/CAM package to undercut the professional packages that cost thousands of dollars and of which 90% of job shops only use 10% of the functionality. My dad's big idea was for me to write a CAD/CAM program featuring only that 10% of functionality and sell it to all those shops for the low price of $500. It was a project that always interested me from an algorithmic engineering standpoint, though never enough for me to drop existing projects to work on it.

Now, my father's original idea was a professional CAD/CAM program that could be used for precision machining, and as awesome as that sounds, I don't have a professional CNC, and I've ventured into CNC on my own from more of a creative/hobby/artistry angle. I don't really have a personal use for a professional CAD/CAM program of my own making - aside from selling it for money. These sorts of projects are a bit of an arcane art form that I *could* easily spend years on, but I'd rather not; especially considering how many subscription pro-level CAM packages exist nowadays, it would ultimately just be a mental masturbation project of sorts. What I do have a use for is a program that can take my wife's designs and turn them into G-code on the fly. In other words, I have a use for a sort of hybrid project that merges my dad's idea for a bare-minimum CAM program with the workflow my wife and I would prefer to use for our CNC endeavors.




Enter PixelCNC...

I've been working on PixelCNC since the end of last summer, and it's finally released. You can check it out at http://deftware.itch.io/PixelCNC/ It's available as an early-access program selling for $55.00, but there's a free trial version that can do all the same stuff, except load images larger than 65k pixels and load/save project files. Hopefully that's enough to get people hooked on it without giving it away for free.


PixelCNC generating a 'horizontal milling' operation.


7.05.2017

Long See no Time


It's been 5 months since my last blog post! In all actuality I simply burned out there for a while. I did manage to finally upgrade my CPU from a dual-core to a quad-core, allowing much better testing and enhancement of Bitphoria's multi-threading system. Many things have been added to Bitphoria since my last blog post.

I finally decided to add a world-wide fluid dynamics simulation system, with LOD to minimize computation. This effectively serves as a sort of 'windmap' for the world, allowing objects to drag the 'air' around and pull other objects, dynamic meshes, and particles along with them. Entities can also cause a momentary change in pressure at their location, for blast or black-hole vortex style effects that actually affect surrounding smoke and entities. Rockets can now leave swirling trails of smoke, and cause particles to wisp around as they zoom through a cloud of them. It's pretty neato, but purely a superficial effect. It's something I've wanted in Bitphoria since I first sat down and sketched out ideas, before I even started writing the engine.
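
To illustrate the windmap concept, here's a bare-bones sketch of the idea - not Bitphoria's actual implementation, and it omits the real fluid solve and the LOD: a coarse 3D grid of velocity vectors that moving entities add impulses to, that particles and dynameshes sample each tick, and that decays over time.

    #include <math.h>

    /* Coarse grid of wind velocities covering the world; a rough sketch of the
     * 'windmap' concept, not Bitphoria's actual fluid dynamics code. */
    #define WIND_DIM 32

    typedef struct { float x, y, z; } vec3;

    static vec3  wind[WIND_DIM][WIND_DIM][WIND_DIM];
    static float wind_cell_size = 8.0f;   /* world units per cell */

    static int wind_idx(float p) {
        int i = (int)floorf(p / wind_cell_size);
        return i < 0 ? 0 : (i >= WIND_DIM ? WIND_DIM - 1 : i);
    }

    /* moving objects drag the 'air' in their cell along with them */
    static void wind_add_impulse(vec3 pos, vec3 vel, float strength)
    {
        vec3 *c = &wind[wind_idx(pos.x)][wind_idx(pos.y)][wind_idx(pos.z)];
        c->x += vel.x * strength;
        c->y += vel.y * strength;
        c->z += vel.z * strength;
    }

    /* particles and dynamesh nodes query the local wind each tick */
    static vec3 wind_sample(vec3 pos)
    {
        return wind[wind_idx(pos.x)][wind_idx(pos.y)][wind_idx(pos.z)];
    }

    /* per-tick decay so disturbances die off over time */
    static void wind_update(float dt)
    {
        float decay = expf(-dt * 0.5f);
        for(int x = 0; x < WIND_DIM; x++)
        for(int y = 0; y < WIND_DIM; y++)
        for(int z = 0; z < WIND_DIM; z++)
        {
            wind[x][y][z].x *= decay;
            wind[x][y][z].y *= decay;
            wind[x][y][z].z *= decay;
        }
    }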




I've also added signed distance function primitives to the procedural modeling system for easily constructing polygonal, wireframe, and point-cloud geometries for entities, CSG-style. This has made it much easier to model more interesting entity appearances, and scripters are no longer forced to plot individual vertices to create triangles.
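
For reference, the signed distance functions involved are the standard ones. Below is a generic sketch - not Bitphoria's modeling code - of a couple of primitives and the CSG-style combine operators, including a polynomial smooth-min for blended merges; sample these over a voxel grid and any negative-valued voxel is inside the solid.

    #include <math.h>

    typedef struct { float x, y, z; } vec3;

    /* Standard SDF primitives and combine operators; a generic sketch, not
     * Bitphoria's actual modeling code. Negative = inside the solid. */
    static float sdf_sphere(vec3 p, vec3 c, float r)
    {
        float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        return sqrtf(dx*dx + dy*dy + dz*dz) - r;
    }

    static float sdf_capsule(vec3 p, vec3 a, vec3 b, float r)
    {
        vec3 pa = { p.x - a.x, p.y - a.y, p.z - a.z };
        vec3 ba = { b.x - a.x, b.y - a.y, b.z - a.z };
        float h = (pa.x*ba.x + pa.y*ba.y + pa.z*ba.z) /
                  (ba.x*ba.x + ba.y*ba.y + ba.z*ba.z);
        h = h < 0.0f ? 0.0f : (h > 1.0f ? 1.0f : h);
        float dx = pa.x - ba.x*h, dy = pa.y - ba.y*h, dz = pa.z - ba.z*h;
        return sqrtf(dx*dx + dy*dy + dz*dz) - r;
    }

    /* CSG: hard union/subtraction, plus a smooth union for 'blended' merges */
    static float sdf_union(float a, float b)    { return a < b ? a : b; }
    static float sdf_subtract(float a, float b) { return a > -b ? a : -b; }

    static float sdf_smooth_union(float a, float b, float k)
    {
        float h = 0.5f + 0.5f * (b - a) / k;
        h = h < 0.0f ? 0.0f : (h > 1.0f ? 1.0f : h);
        return (b * (1.0f - h) + a * h) - k * h * (1.0f - h);
    }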

Instead of explaining it much further I'll just show you a bunch of development screenshots from when I was working on the SDF modeling stuffs:

When things were first starting to work: plotting points for individual SDF voxels - sized according to the 'density' of the voxel they represent.

Generating a point-cloud only on the surface where the voxels are zero distance from the surface of the final distance field as produced from a cube added, a ring of spheres subtracted, and a single green sphere added to the top.

Utilizing the same voxel triangulation code to yield triangle mesh geometry from a SDF model.

In my attempt to add the ability to smooth an isomesh per the distance field, a lot of things were going wrong (these are supposed to just be smooth spheres).

A sphere that hasn't been smoothed, showing the base isomesh.

Some different tests: a smoothed cone (with crooked tip, this has since been fixed), a purple sphere with a pink capsule merged/added to it, as well as a green sphere with an orange capsule 'blended' into it (note the smooth transition between surfaces) while also being smoothed properly. You can smoothly blend/merge primitives while still producing a coarse isomesh with 45 degree angles.

A ring of spheres blend-merged and the result smoothed over.

Testing a different player appearance, using various things, while having the shield powerup - a set of overlapping point cloud spheres that all spin independently by rolling around against the surrounding world surfaces while the player moves.

A screenshot of the last of the testing/development game before I started scripting a new base game from scratch.

A bunch of other things have been added to Bitphoria as well, but I haven't touched it in at least 2 months and it's currently not something I'm particularly motivated to pursue. After making all the progress that I did on Bitphoria a few months ago I began scripting a base/default game that I would then use as a template for creating various game modes to release with the next version.

The base game is a sort of deathmatch game with simple AI drones and obstacles and hazards for players to negotiate while battling it out with one-another, which always seemed more interesting to me than just raw PvP deathmatch gameplay. Anyway, I just didn't find working on it rewarding anymore. I can come up with hundreds of little ideas and mechanics, knowing how to go about implementing them by exploiting the capabilities of Bitphoria's scripting system, but it just doesn't excite me like it did when I was younger. Back then I was working with the Quake engine. I'm sure scripting stuff in Bitphoria would be a blast for many other young spry minds out there, but I'm not of a mind to seek those kids out, even though they were what I was thinking of when I designed the whole thing.


Showing the effectiveness of Bitphoria's new FXAA post-processing shader at smoothing out super-jaggy aliasing on stair-step edge pixels. One of the many new things added to Bitphoria over the last while.

My goal has always been to make enough interesting stuff to show off Bitphoria's capabilities as a platform for creating, sharing, and playing custom games with other people - and hopefully have it be inspiring enough to motivate people to engage their own creative minds within the paradigm Bitphoria's scripting system provides. Well, as it stands, this probably won't be happening any time soon. I've had to make my peace with that over the last few months. I've been struggling to let myself work on anything else or pursue any of my other passions without berating myself for letting Bitphoria development go idle. My resolution has been to accept that I owe it to myself to take care of my own mental well-being and *let myself* pursue other projects and passions, because nothing good comes from sitting around not working on anything purely out of guilt.

Yes, I wish that I could knock out Bitphoria in "record-manic-stay-up-all-night-not-caring-about-anything-else-in-life" time, which was naively my plan from the beginning, but it's just not in the stars. Am I lazy? Eh.. But if that were the case I don't think I'd feel like there aren't enough hours in the day to work on what I *do* want to work on, especially after having overcome the self-inflicted shame I'd been enduring. I was tempted to release Bitphoria completely FOSS - just dump the code on the interwebs and abandon any and all aspirations of monetizing it. I'd just be giving all my work away for free. Alas, me lady talked me out of it and explained that I should just let it sit until I'm ready to come back to it. So that's the plan.


Bitphoria in its current form.

I've always been excellent at arcane technical pursuits and hacking away at them into the night, even now at thirty years old. But when it comes to actually designing a fun game, or dealing with PR and promotion, I am seriously lacking in drive and/or spirit. With recent developments I've become more inclined to focus on keeping my creative spirits high and working on what I love: tackling difficult algorithmic problems. I've pretty much resigned myself to being the Wozniak to someone else's Jobs. I haven't met my 'Jobs' yet, and I hope I do someday, because I think I have a lot to offer and share with the world, but lack the ability to really get it out there.


The good old days.

In my relative slump I did manage to muster the gumption to start playing around on my CNC once again - yet another 'abandoned' project I had begun feeling guilty about letting sit untouched and unloved for so long. It's really nice to have something to work on with my hands though :D

I've since explored a few ideas and have somehow finally convinced my wife that making stuff on the CNC is a financially worthwhile pursuit - one that doesn't require dealing with nearly as many customers as our current crafting products do through our online business. We could be selling fewer big-ticket, top-dollar, high-end CNC-milled items rather than many cheaper, smaller decorative items. In other words we could be making more money for less work, and dealing with fewer customers, if we both transitioned our business toward producing large quality works as a team.

It would definitely be nice if we could spend more time together again like the old days, and I see CNC projects as being the nearest of several keys to unlocking that future for us, but it must be as a team. I don't believe she's the Jobs to my Woz, but I do believe we have the potential to achieve great things together. It has worked thus far with our online business, and I feel that she's fully capable of meeting me half-way while we engage a new medium together.

I've also been sketching out and outlining some ideas for an old project my late father had proposed and actively tried to encourage me to pursue. I'll save the details on that for a later blog post.



2.06.2017

Post-Processing Shaders and Effects




Bitphoria has had a simple post-processing effects setup in place for some time now. It comprised a single framebuffer object with a color and depth texture attached, read by a single fragment shader on a full-screen quad to produce a simple mipmap blur and contrast boost. The aim was just to achieve a basic effect beyond what rendering the scene straight to the screen/window could provide. It could not perform multiple post-processing passes or effects requiring multiple shader stages.

I've always had the nagging sensation that something's visually missing from Bitphoria, beyond the general lack of coherent visuals - which I attribute to not having sat down and actually designed a cohesive appearance to a game via the engine's scripting functionality. With the ability to add various post processing effects to Bitphoria I feel that a more coherent visual aesthetic can be achieved beyond what one simple post processing shader offered.

For the new post-processing effect system I wanted to be able to easily add more shaders and route inputs/outputs between them. At the beginning of each frame a 'raw framebuffer' is bound, which has three color texture attachments - RGBA color, XYZW reflection vectors, and an HSL 'overbright' emission texture - plus, of course, a depth texture. The reflection vectors texture is written by all of the shaders used to render world geometry, procedural models, etc. If a surface is not supposed to be reflective, this is indicated by a reflection vector facing the viewpoint (Z = 0). The HSL overbright emission texture is for the 'spectral trails' shader effect, which creates a quickly-fading 'rainbow' trail behind objects that write to that texture when rendered.
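
In OpenGL terms, a raw framebuffer like the one described above amounts to an FBO with multiple color attachments selected via glDrawBuffers. Below is a sketch of that general setup - assuming a current GL 3.x context with an extension loader like GLEW, with error checking omitted - not the engine's exact code.

    #include <GL/glew.h>

    /* Sketch of an MRT "raw framebuffer": color, reflection vectors, overbright
     * emission, plus a depth texture. Assumes a current GL context. */
    static GLuint make_texture(int w, int h, GLenum internal, GLenum fmt, GLenum type)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, internal, w, h, 0, fmt, type, NULL);
        return tex;
    }

    static GLuint make_raw_framebuffer(int w, int h)
    {
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        GLuint color   = make_texture(w, h, GL_RGBA8,   GL_RGBA, GL_UNSIGNED_BYTE);
        GLuint reflect = make_texture(w, h, GL_RGBA16F, GL_RGBA, GL_FLOAT);
        GLuint emit    = make_texture(w, h, GL_RGBA8,   GL_RGBA, GL_UNSIGNED_BYTE);
        GLuint depth   = make_texture(w, h, GL_DEPTH_COMPONENT24,
                                      GL_DEPTH_COMPONENT, GL_FLOAT);

        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color,   0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, reflect, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, emit,    0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depth,   0);

        GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
        glDrawBuffers(3, bufs);

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return fbo;
    }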

Objects leaving spectral trails on the screen. This can be annoying and so has been toned down and is mostly used sparingly for momentary effects like explosions.

At the end of the frame the postfx system is executed, which then goes ahead and renders a fullscreen quad for each registered shader effect - binding all necessary textures and setting all prescribed GLSL uniforms for each.

Creating an effect stage involves loading its shader, creating an FBO, and attaching a single color texture to the FBO to be used as the effect's "output". A depth texture is also created and attached to the FBO only for what's called "FBO completeness", even though the full-screen quads do not convey any depth information. Still, a shader effect *could* generate depth values that would be written to the depth textures using gl_FragDepth, and that depth texture could be utilized by succeeding effect stages.

From there shader uniforms can be added from the engine with "pfx_parm()", where ints, floats, vec3s and mat4s (4x4 matrices) can have a pointer to their memory stored and used to set any necessary values the effect's shader may require.


Bitphoria's current postfx shader pipeline. Green boxes are texture outputs from effect shaders.


Similarly, "pfx_input()" is used for referencing other effects, allowing their FBO color or depth textures to be bound when the current effect is being rendered, for routing the output of stages as inputs into others. Some effect shaders will use the FBO texture output of previous stages (or the output of later stages from the previous frame) as inputs for certain effects.