This New Platform Can Boost the Memory of AR Apps

Picture it: a table topped with a package of Lego bricks spilling out across it. Put on your hypothetical augmented reality goggles and join me. An AR camera catalogs all of the bricks in front of you, and you're presented with ideas for new models based on the pieces you have available. The doorbell rings; you step away to answer it, then return. Fortunately, your glasses won't need to rescan any of those pieces. As far as the AR is concerned, they're right where you left them.


Perceptus, a new augmented reality software platform from Singulos Research, is built around the idea of persistently remembering scanned real-world objects. Even when the camera is no longer pointed at a scene, Perceptus retains those objects in memory. While you walked over to answer the door, the platform kept thinking about what else you could build with the pieces on the table; it didn't stop working just because you were no longer focused on the puzzle.

"In an AR environment, there is no need to gaze at the entire room at once," says Brad Quinton, CEO of Singulos Research. "As humans, we have no problem with the idea that there are things out there that we can't currently see, because we've seen them before and remember them. Once you have AR that understands what is going on around you, it can go and do things for you."

That, at least, is the plan. Developers currently use Apple's ARKit and Google's ARCore to create AR apps; Perceptus functions as a layer on top of these technologies. Several steps must happen before any of this works on your mobile device.

The app developer supplies Singulos Research with 3D models of the Lego bricks, or of any other object. Through a machine learning process, the platform analyzes how the object would appear in various lighting conditions, on different surfaces, and so on. Perceptus is then built into the developer's app to take advantage of this new object understanding. As with our hypothetical Lego app, it remains the developer's job to ensure the program actually suggests things you can build with the bricks it identifies.
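To make the training step concrete, here is a minimal, invented sketch of the idea described above: from a single CAD model, enumerate every combination of lighting, surface, and viewing angle, where each combination would become one rendered training view. Every name and parameter value here is an assumption for illustration, not Perceptus' actual API.

```python
from itertools import product

# Hypothetical appearance variations a platform like Perceptus might render
# from one CAD model; the values are invented for this sketch.
LIGHTING = ["daylight", "warm_indoor", "cool_indoor", "dim"]
SURFACES = ["wood_table", "white_desk", "carpet"]
ANGLES = [0, 45, 90, 135, 180, 225, 270, 315]  # camera yaw, in degrees

def training_variants(model_id):
    """Yield one parameter set per synthetic view of the CAD model."""
    for light, surface, angle in product(LIGHTING, SURFACES, ANGLES):
        yield {"model": model_id, "lighting": light,
               "surface": surface, "angle_deg": angle}

variants = list(training_variants("lego_brick_2x4"))
print(len(variants))  # 4 lightings x 3 surfaces x 8 angles = 96 views
```

The real system would feed each rendered view into model training; the point of the sketch is just how one object model fans out into many synthetic examples.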

A great deal of manual labor still goes into scanning and identifying objects. To get started, Perceptus platform licensees must submit computer-aided design models of the objects they want the platform to remember. Those CAD models are added to Singulos' library, so future developers can more easily search the digital stacks for what they're looking for. According to Quinton, video game producers already have "huge quantities of very precise 3D models" that Perceptus will be able to detect in the near future.

Because the platform is trained to identify specific objects well before you open an AR app that might use them, there is no need to send image data to a cloud server for analysis. Perceptus runs locally and works on any current mobile processor without issue. It's impressive in action: I watched as Quinton moved an iPad closer to a table full of Lego bricks, and the camera began detecting all the shapes and their colors in real time. It missed a few pieces, but it came extremely close.
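An invented illustration of why no cloud round-trip is needed: the app ships with a locally stored catalog of pre-trained object signatures, so recognizing a detected shape is just an on-device lookup. The "signature" tuples below stand in for whatever learned representation the platform actually uses; nothing here is real Perceptus code.

```python
# On-device catalog of objects the app was trained on ahead of time.
LOCAL_CATALOG = {
    ("brick", "2x4", "red"): "lego_brick_2x4_red",
    ("brick", "2x2", "blue"): "lego_brick_2x2_blue",
}

def identify(signature):
    """Match a detected signature against the local catalog; no network call."""
    return LOCAL_CATALOG.get(signature)

print(identify(("brick", "2x4", "red")))   # lego_brick_2x4_red
print(identify(("plate", "1x1", "gray")))  # None: shape not in this catalog
```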

The chess demo the company created, which I used to play against Quinton virtually, was even more remarkable. He pointed the iPad's camera at a physical board holding only the white pieces. As he moved a tangible piece on his board, the corresponding piece moved on a virtual board running in a browser tab on my computer screen. As soon as I made a move, a virtual black piece appeared on his board and mirrored it. Watching this game through an iPad's screen is strange, but imagine it through AR glasses and it makes a lot more sense.
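The demo's core loop can be pictured as a toy sketch: a physical move recognized on one device updates a shared board state, which the other device then renders virtually. The class, square names, and in-process "sync" below are invented stand-ins for the real networked demo.

```python
class SharedBoard:
    """Board state both players see, however their moves are detected."""

    def __init__(self):
        # Just the squares this sketch touches, keyed by algebraic notation.
        self.squares = {"e2": "white_pawn", "e7": "black_pawn"}

    def apply_move(self, src, dst):
        """Record a move detected by either the camera or the virtual UI."""
        piece = self.squares.pop(src)
        self.squares[dst] = piece
        return piece

board = SharedBoard()
board.apply_move("e2", "e4")  # Quinton slides a physical white pawn forward
board.apply_move("e7", "e5")  # I answer with a virtual black pawn
print(sorted(board.squares))  # ['e4', 'e5']
```

The real demo would also need to transmit state between devices; the sketch only shows that both the physical and virtual sides write into one shared representation.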

A major goal for Perceptus, Quinton adds, is to run across a wide range of platforms: Apple products, smartphones with Qualcomm Snapdragon chips, Google-powered devices, and the coming wave of AR headsets from these companies. It should be simple to adapt to various augmented reality systems.

"The interplay between the virtual and physical worlds is what I find neatest about this," Quinton explains. "We have this metaverse-y thing that isn't real (there aren't any [chess] pieces here), but we've constructed this new reality. It's not difficult to envision a scenario in which you have a chessboard next to you and use this software. It forms a reality that exists in physical space for both of us, yet that neither of us can actually touch."

This technique has a real advantage, says Matthew Turk, a computer vision expert and president of the Toyota Technological Institute at Chicago. Because the system learns from CAD models, developers don't have to take thousands of photos of an object or have people label piles of images found on the internet. Turk thinks it's a good solution for AR apps built around a physical component, though it may not be suitable for general-purpose AR.

"It is impossible to have a CAD model of everything that you might come into contact with," Turk says. "If they're only looking at things for which CAD models already exist, that's a small set, but it can grow over time as more libraries are made available. For general use that's not enough, but it's enough for a number of fascinating uses."

We're still a long way from a world where you simply point your AR glasses at something and they know exactly what you're looking at, but this is a good place to start.
