In this age we need to think about things like voice control, 3D manipulation of data structures, and a dynamic view of the code.
We can certainly keep designing 2D editors (and we will most likely always use them to some extent), but I believe it is more important to consider different UI paradigms altogether.
For instance, what about editing a living code environment? Game development is very immersive: you can manipulate a running environment and see results immediately. How can this be extended to other development tasks like server-side development?
What if, when you select a for loop in a code fragment, a 3D visualization of the program's data structures at that point is shown to you? What if, instead of launching a debugger, you could run the debugger as you're writing the code, step forward and back, and see these visualizations change?
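To make the "step forward and back" part concrete, here is a minimal sketch in Python: it records a snapshot of the local variables at every executed line while a function runs, then lets you scrub through that history afterwards. The demo function and the printing are made up for illustration; a real tool would hand these snapshots to a (3D) visualizer instead.

    import copy
    import sys

    snapshots = []  # (line number, copy of locals) for every executed line of demo()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_name == "demo":
            snapshots.append((frame.f_lineno, copy.deepcopy(frame.f_locals)))
        return tracer

    def demo():
        total = 0
        for i in range(3):
            total += i * i
        return total

    sys.settrace(tracer)   # record while the code runs...
    demo()
    sys.settrace(None)

    # ...then "scrub" forward (or backward, by reversing) through the recorded states.
    for lineno, local_vars in snapshots:
        print(lineno, local_vars)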
We have all the technology. It's time to get to the next level.
Yes, but. Keep ripeness in mind. It's very easy to spend effort in this area "too soon". So shape projects with care.
I run a Vive on Linux, which required my own stack. That took excessive time, and is limiting, but has some road-less-traveled benefit of altered constraints. So I've gone to talks, and done dev, wearing the Vive with video passthrough AR, with emacs and RDP, driven by an old laptop's integrated graphics. Yay. But think 1980s green CRT (on PenTile, green is higher res). In retrospect, it was a sunk-cost project-management garden path.
There's been a lot of that. One theme of the last few years has been people putting a lot of effort and creativity into solutions to VR tech constraints, only to have the constraints disappear on a time scale that has them wishing they had just waited and used the time to work on something else. It's fine for patent trolls (do something useless to plunder future value), and somewhat OK for academics (create an interesting toy), but otherwise a cause for caution.
So on the one hand, I suggest that if the center tenth of VR displays had twice their current resolution, everything else being current tech, we would already be starting on an "I can have so many screens! And they're 3D! And..." disruption in software development tooling. But that's still a year or two out. Pessimistically, perhaps even more, if VR market growth is slow.
In the meantime, what? Anything where the UI isn't the only bottleneck (work on the others). Or which can work in 2D, or in 3D-on-a-2D-screen (prototype on multiple screens). Or which is VR, but is resolution-insensitive, and doesn't require a lot of "if I had waited 6 months, I wouldn't have had to roll my own" infrastructure. Or which sets you up interestingly for the transition (picture kite.com, not as a 2D sidebar, but generating part of your 3D environment). Or which can be interestingly demo-spiked (for fun, mostly?).
For example, I would love to be working on VR visual programming on top of a category-theoretic type system. But there seems to be no part of that which isn't best left at least another year to ripen. Though maybe 2D interactive string diagrams might be a fun spike.
Very well put. I have to agree that these technologies are still blooming and there are uncertainties, especially, in my mind, around the cost of buying the devices. The HoloLens, for example, with a $3,000 developer edition, isn't exactly accessible to the general community! The next 10 years, however, will probably be pretty exciting in the AR/VR space.
What about voice-controlled programming, though? I always thought it would be nice to voice-control my OS. Not specifically for a text editor, but as a general interface to the OS. It would be nice to move these features out of the cloud and directly onto our systems. But then again, a lot of companies (Amazon, Microsoft, Apple) probably don't want to encourage reverse-engineering of their intellectual property. We definitely need open-source variants.
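For what it's worth, fully local recognition is already workable with open-source tools. Here's a rough sketch using the Vosk library (offline, no cloud); the model path, the WAV file, and the command table are assumptions for illustration:

    import json
    import subprocess
    import wave

    from vosk import Model, KaldiRecognizer

    model = Model("vosk-model-small-en-us-0.15")  # model directory you download yourself
    wav = wave.open("command.wav", "rb")          # 16 kHz mono PCM recording (assumption)
    rec = KaldiRecognizer(model, wav.getframerate())

    while True:
        data = wav.readframes(4000)
        if not data:
            break
        rec.AcceptWaveform(data)

    text = json.loads(rec.FinalResult())["text"]
    print("heard:", text)

    # Dispatch locally instead of to the cloud: map phrases to OS actions.
    commands = {
        "open terminal": ["x-terminal-emulator"],
        "lock screen": ["loginctl", "lock-session"],
    }
    if text in commands:
        subprocess.run(commands[text])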
Better AI chips with lower power-consumption and optimization for these types of operations will hopefully usher in a new set of productivity-enhancing applications!
You might enjoy http://www.iquilezles.org/live/ where he live-codes some ray marching in a kind of OpenGL editor. It's not quite what you're talking about, since it's just running the OpenGL code, but you could imagine it going through some kind of compiler/visualizer pipeline like you're thinking about.
> We have all the technology.
> It's time to get to the next level.
Just because you can does not mean you should. Dictating code may be useful if you cannot use your hands, but that's about it. In all other cases there is no benefit to doing it.
Well, consider using both dictation and typing simultaneously. Vi and Kakoune are ergonomic because they require minimal changes to hand position. But if you added voice commands like "toggle tab 2", "toggle terminal", or "go next brace", all while STILL typing, my guess is efficiency would go up and physical fatigue would go down.
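As a rough sketch of that split: short spoken phrases get translated into editor commands and piped into a running Kakoune session with `kak -p <session>` (an existing Kakoune feature; commands arrive on stdin and run server-side, so they're wrapped to target a client). The session/client names and the phrase table are made up for illustration, and the speech recognizer is whatever produces `phrase`.

    import subprocess

    SESSION = "main"    # kakoune session name (assumption)
    CLIENT = "client0"  # kakoune client to act in (assumption)

    PHRASES = {
        "save file":      "write",
        "toggle comment": "comment-line",
        "next buffer":    "buffer-next",
    }

    def run_voice_command(phrase: str) -> None:
        command = PHRASES.get(phrase)
        if command is None:
            return
        # Wrap the command so it runs in the context of a specific client.
        wrapped = f"evaluate-commands -client {CLIENT} {command}"
        subprocess.run(["kak", "-p", SESSION], input=wrapped.encode(), check=False)

    run_voice_command("save file")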
Folks have created Dragon-based voiced-sound vocabularies for editor control. Voice strain is an issue, but you can load-share with typing.
You can do Google voice recognition in browser/WebVR, but it's better at sentences than brief commands.
Another component, largely unexplored, is hand-controller motion. The Vive's wands are highly sensitive in position and angle. Millimeterish. So imagine Swype text (phone-keyboard continuous sliding) with 6 DOF. Plus the touchpad (the buttons aren't something you'd want to use all the time). Keyboard+pad is mature tech. But wands-with-pads look potentially competitive, and (not yet available) hand-mounted finger tracking added to keyboard+pad might make for a smooth transition. Plus voice. The space of steep-learning-curve input UIs for professionals looks intriguing.
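As a toy illustration of the 6-DOF swype idea: project the wand tip onto a virtual keyboard plane and collect the distinct keys the path crosses while the touchpad is held. The 3x3 layout, cell size, and canned path are invented for the sketch; a live version would poll a tracking API such as OpenVR instead.

    from typing import Iterable, Tuple

    KEYS = [["a", "b", "c"],
            ["d", "e", "f"],
            ["g", "h", "i"]]
    CELL = 0.05  # metres per key cell (assumption)

    def key_under(x: float, y: float) -> str:
        col = min(max(int(x / CELL), 0), 2)
        row = min(max(int(y / CELL), 0), 2)
        return KEYS[row][col]

    def swipe_to_keys(path: Iterable[Tuple[float, float, float]]) -> str:
        """Reduce a continuous wand path to the sequence of distinct keys it crosses."""
        out = []
        for x, y, _z in path:  # project onto the keyboard plane by dropping z
            k = key_under(x, y)
            if not out or out[-1] != k:
                out.append(k)
        return "".join(out)

    # A canned path standing in for a swipe from "a" through "e" to "i".
    print(swipe_to_keys([(0.01, 0.01, 0.0), (0.06, 0.06, 0.0), (0.11, 0.11, 0.0)]))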