
The big problem there is resolution and the "Screen door" effect.

Text has to be fairly large to be readable. We will need enormously high resolution and an incredibly low dot pitch to solve those issues and make productivity tools possible.

This is the tech space where I dream of working. I have been designing features for developer teams in AR but without the hardware being available, it's all theory.



Yup! I cringed when I read the above comment and imagined working in Excel on current AR displays. Resolution is still an issue, as you've stated, and there are still heaps of issues in the realm of focal depths/focal rivalry when it comes to AR. Current displays have a set focal depth and people don't realize how much that can affect things.


There is work[1] to address focal issues, and obviously resolution will continue to get better. I notice the focal issues in VR racing sims when I glance between my mirrors and back to the track: you expect a shift in focal depth, and it hurts your brain when there is none.

I'm still excited to see where HMDs get to in the next 5-10 years, both AR and VR.

Oculus Research's 'Focal Surface Display'

1: https://www.youtube.com/watch?v=W7JjANVKINA


So I haven't tried the tech, but one of the main advantages of the 'light field' tech they've been developing is supposedly solving this focal length issue - i.e. near things appear at a different focal point than objects farther away. Can't say how well it works in practice though.


That and the whole "how do you make stuff opaque" problem. All the demos show semi-transparent objects being added to the real world. Making text legible when you can't fully control the background on which it is displayed is very hard. My best guess is that if they don't advertise that use case, it's because they don't really feel ready to be judged on it.


Magic Leap claims to be able to occlude real-world objects with their AR, not just semi-transparent "hologram" overlays. This is something I'd want to see for myself, but here's what Brian Crecente at Rolling Stone wrote about it:

> Miller wanted to show me one other neat trick. He walked to the far end of the large room and asked me to launch Gimble. The robot obediently appeared in the distance, floating next to Miller. Miller then walked into the same space as the robot and promptly disappeared. Well, mostly disappeared, I could still see his legs jutting out from the bottom of the robot.

> My first reaction was, “Of course that’s what happens.” But then I realized I was seeing a fictional thing created by Magic Leap technology completely obscure a real-world human being. My eyes were seeing two things existing in the same place and had decided that the creation, not the engineer, was the real thing and simply ignored Miller, at least that’s how Abovitz later explained it to me.

If they really have that working, it's a huge advantage over systems like HoloLens.

https://www.rollingstone.com/glixel/features/lightwear-intro...


>>able to occlude real-world objects

Here's an application for drivers and pilots: smart sun-shade to block out bright lights (headlights), sunlight, or glints off surfaces (water).


Based on Magic Leap's explanation, I'm not so sure it would work for that.

> My eyes were seeing two things existing in the same place and had decided that the creation, not the engineer, was the real thing and simply ignored Miller, at least that’s how Abovitz later explained it to me.

That sounds to me like some kind of light-field trickery where it puts an object in front of the background using the light field, but doesn't physically block the light. Instead, your brain processes it out because your visual model of the space has something in front of it.

I'm imagining this works sort of like the effect where you overlay left and right eye images that don't match: your brain fades between the two because it can't decide what's there. Except instead of that disagreement, both of your eyes say "this thing is in front," and that's how you see it. It's hard to say from the kind of hand-wavey explanation.

If that's the case, I don't know that it would work for really bright glare. Either it might be a strong enough signal to overpower the light field, or the bright light scattering around your eyeball might still cause enough bloom to wash out your vision.


Furthermore, even if the technology DID somehow manage to trick your brain into not acknowledging the glare, the bright light would still be entering your eyes and causing damage to your retinas. It would be similar to the case of somebody who has nerve damage and doesn't feel pain, so they don't know that their hand is resting on a hot stove until they smell burning flesh.


If only we had technology that, when subjected to an electrical field, turned glass from opaque to translucent and was produced in sufficient quantities for applications such as airplane and building windows to make it relatively inexpensive.


Easier said than done; just an LCD won't work for darkening. http://blogs.valvesoftware.com/abrash/why-you-wont-see-hard-...


The occluder wouldn't be in the same focal plane as the real world objects you are looking at. It'll darken a vague area around the object you want to mask.
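The vagueness falls out of geometric optics: an occluder sitting millimeters from the eye is far out of focus, and its edge smears over an angle of roughly pupil diameter divided by occluder distance. A toy calculation (the pupil size and lens-to-eye distance are assumed numbers, just for illustration):

```python
import math

# Sketch of why a near-eye LCD occluder can't cast a sharp shadow.
# Assumed numbers (pupil diameter, occluder distance) are illustrative only.

def penumbra_deg(pupil_mm, occluder_mm):
    """Approximate angular width of the soft edge cast by a defocused occluder."""
    return math.degrees(pupil_mm / occluder_mm)

# 4 mm pupil, occluding LCD ~20 mm from the eye
print(penumbra_deg(4.0, 20.0))  # ~11.5 degrees of blur around the mask edge
```

Eleven-odd degrees of penumbra is why a simple darkening layer masks a vague region rather than a crisp silhouette.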


Probably I'm missing something, but I don't see this as a hard problem to solve: LCDs work by controlling the transparency of each pixel to let through more or less backlight, and OLEDs emit color without needing a backlight at all, so combining both should give you controlled transparency behind the color-emitting pixels.


Don't you control that by doing text recognition and always placing a frame behind? Let the user customize the color schemes in question (bright white + black text, or off white + off black etc).

If the background is a blue wall in your living room, you place a white framing layer behind the text you're looking at. If the real wall is black, you do the same thing. Makes no difference what color the wall is then.

Wall -> Frame Layer -> Text

Text recognition should be among their easiest chores (which is to say it's still not easy, just on the lower end of the difficulty scale of what they're trying to do).


The issue is that if you have a transparent display for AR, light from the background goes through it. There's no way to just "put a white framing layer behind it" because the light the display puts out is being added on top of whatever light is coming through from behind it. This is what happens with Microsoft's HoloLens headsets.

Magic Leap is claiming to have occlusion of background objects working, but hasn't really explained the mechanism. It sounds like it's some sort of "light field" trickery where they let the light through, but your brain knows there's a virtual object in front of it and mentally processes it out. Cool if it works, but I'll need to see this to believe it.
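The additive-light problem can be sketched in a few lines (a toy linear-light model with values in [0, 1], not any vendor's actual pipeline):

```python
# Why see-through AR displays can't draw black text on a bright wall:
# the display's light is ADDED to the scene light, never subtracted.

def additive_display(background, emitted):
    """See-through optics: emitted light adds to whatever comes through."""
    return min(1.0, background + emitted)

def occluding_display(background, emitted, opacity):
    """Hypothetical occluding pixel: blocks scene light before emitting."""
    return min(1.0, background * (1.0 - opacity) + emitted)

bright_wall = 0.9   # luminance of the real background
black_text = 0.0    # we want to render black

# Additive: emitting zero light leaves the wall fully visible.
print(additive_display(bright_wall, black_text))                 # 0.9 -- no text

# With true occlusion: block the wall, emit nothing -> actual black.
print(occluding_display(bright_wall, black_text, opacity=1.0))   # 0.0
```

That zero-emission case is exactly the "white framing layer" proposal failing: on an additive display, the darkest pixel you can show is whatever the background already is.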


There is a Finnish startup, Varjo, working on high-res displays in VR: https://varjo.com/


Nice. I think the idea is that by tracking and moving with the eye, you don't need a uniformly high-resolution display. You just focus the resolution you have at the center of the user's field of vision, where most of our visual acuity is located.


That would be foveated rendering. Google Research recently put out a blog post on a new foveation pipeline they're developing.

https://research.googleblog.com/2017/12/introducing-new-fove...
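As a rough sketch of the idea (the eccentricity thresholds here are made-up illustrative numbers, not from Google's pipeline or any shipping HMD):

```python
import math

# Toy foveated-rendering policy: shade at full resolution near the tracked
# gaze point and in coarser blocks further out. Coordinates are in degrees
# of visual angle; the 5/15 degree cutoffs are assumptions for illustration.

def shading_rate(pixel, gaze, fovea_deg=5.0, mid_deg=15.0):
    """Return the edge length, in pixels, of the block one shaded sample covers."""
    eccentricity = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    if eccentricity <= fovea_deg:
        return 1      # full resolution at the fovea
    elif eccentricity <= mid_deg:
        return 2      # 2x2 blocks in the near periphery
    else:
        return 4      # 4x4 blocks in the far periphery

print(shading_rate(pixel=(20.0, 0.0), gaze=(0.0, 0.0)))  # 4
```

The payoff is that shading cost drops roughly with the square of the rate in the periphery, which is most of the screen area.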


That's a solvable problem though - it's more a matter of cost, and pixel density will go up and cost come down over time. I don't see this as a long-term limitation of the tech. Have you seen the Pimax 8K VR headset, for instance? https://www.kickstarter.com/projects/pimax8kvr/pimax-the-wor...


In reality, Pimax delivers 1440p per eye. Someone needs to calculate this, but I think even that falls well short of a few 2K+ displays projected into your FOV. I think the technology is really far from that at any price. Don't trust the hype.
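The back-of-the-envelope math can be sketched in angular resolution, i.e. pixels per degree (the FOV and monitor numbers below are my assumptions, not measured specs):

```python
import math

# Rough pixels-per-degree (ppd) comparison between an HMD panel and a
# desktop monitor. All FOV and distance figures are illustrative guesses.

def ppd(horizontal_pixels, horizontal_fov_deg):
    """Angular pixel density across the horizontal field of view."""
    return horizontal_pixels / horizontal_fov_deg

def monitor_fov_deg(width_cm, distance_cm):
    """Horizontal angle a flat screen subtends at a given viewing distance."""
    return math.degrees(2 * math.atan((width_cm / 2) / distance_cm))

# 1440p per eye (2560 px wide) spread over an assumed ~100 deg per-eye FOV
hmd = ppd(2560, 100.0)

# 27" 4K desktop monitor (~59.8 cm wide) viewed at 60 cm
desktop = ppd(3840, monitor_fov_deg(59.8, 60.0))

print(hmd)      # ~26 ppd
print(desktop)  # ~72 ppd
```

Under these assumptions the headset lands around a third of the desktop monitor's angular density, which is the gap you feel when you try to read small text.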


It's probably solvable eventually, but the tech isn't there yet.

Companies can focus on the gaming industry first because it's tolerant of slightly degraded graphics and an inability to render high-fidelity text, and they can start selling headsets to that demographic now. A person writing code all day or looking at spreadsheets will not tolerate reading small text through a screen door.


Agreed - if I could spin my experience as a classroom teacher into designing K-12 AR productivity tools, that would be an amazing career step.



