If you aren’t an Android developer, you may not have heard ARCore come up in conversation. The ARCore SDK (short for Augmented Reality Core software development kit) is the platform behind the augmented reality experiences you’ve had in many Android and iOS apps. With over 1.4 billion AR-compatible smartphones across the planet, Google’s ARCore is a leading platform for augmented reality development.




The best Android phones, including Google Pixel phones, have the hardware capabilities to support AR, but that wasn’t always the case. Any mobile device that runs AR needs a capable camera, motion sensors, and enough processing power. For AR to work, your device must track motion, estimate lighting, and build a basic understanding of the environment.
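For developers wondering whether a given phone clears that bar, ARCore can answer directly. Here’s a rough Kotlin sketch using the SDK’s ArCoreApk entry point; the helper function name is our own:

```kotlin
import android.content.Context
import com.google.ar.core.ArCoreApk

// Rough sketch: ask ARCore whether this device can support AR at all.
// checkAvailability() may return a transient "still checking" state,
// so production apps typically re-query a moment later.
fun isArSupported(context: Context): Boolean =
    ArCoreApk.getInstance().checkAvailability(context).isSupported
```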



How ARCore integrates digital objects into the physical world

If you’ve ever used Google Lens or Google Maps Live View, you’ve used AR tech developed with ARCore. Google officially released its AR development platform as a competitor to Apple’s ARKit in March 2018, making AR development available to anyone who can run the SDK.



Augmented reality is distinct from virtual reality, even though they can be used together. ARCore rests on three core capabilities: motion tracking, light estimation, and environmental understanding. AR apps detect and understand things in the physical environment, render light and shadows that look true to reality, and keep every digital object anchored in place as you move your phone around. Let’s take a look at each and how they work.
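For developers, those three capabilities map onto a single session configuration. The sketch below is a minimal Kotlin example, not a full app; the function name is ours, and the depth check matters because not every phone supports it:

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Session

// Minimal sketch: opt into the features discussed below. Motion tracking
// is always on; lighting, plane finding, and depth are set via Config.
fun configureSession(session: Session) {
    val config = Config(session).apply {
        lightEstimationMode = Config.LightEstimationMode.ENVIRONMENTAL_HDR
        planeFindingMode = Config.PlaneFindingMode.HORIZONTAL_AND_VERTICAL
        if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
            depthMode = Config.DepthMode.AUTOMATIC  // hardware-dependent
        }
    }
    session.configure(config)
}
```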

Motion Tracking: Augmented reality uses visual-inertial odometry (VIO), which combines what the camera sees (visual) with motion data from your phone’s sensors (inertial) to estimate the device’s change in position over time (odometry). That data is used to place you and digital objects in the environment and keep those positions accurate as you or your phone moves.

To pull this off, your camera data is combined with data from your accelerometer and gyroscope. Your camera is used for feature detection, mapping things like edges, corners, textures, tabletops, floors, people, and different visual points of interest.


Your accelerometer and gyroscope calculate your device’s position and orientation, so your phone knows where all the mapped features are, even if they’re temporarily out of view or your phone is turned at an angle. Continuously tracking these features relative to your camera’s position keeps objects in the locations they should be.
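All of that tracking work surfaces to developers as a simple per-frame query. A hedged Kotlin sketch (the renderer hookup is omitted) of where the VIO result comes out:

```kotlin
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Sketch: once per rendered frame, pull ARCore's latest tracking result.
// The camera pose is the VIO output: the device's position and
// orientation in the world space ARCore is building.
fun onFrame(session: Session) {
    val frame = session.update()  // waits for a new camera frame
    val camera = frame.camera
    if (camera.trackingState == TrackingState.TRACKING) {
        val pose = camera.pose  // translation + rotation quaternion
        // Feed this pose to your renderer so virtual objects stay
        // registered with the real world as the phone moves.
    }
}
```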


Light Estimation: In many user experiences, light estimation can be subtle or go unnoticed. Your device estimates the direction and intensity of light sources, and that data is used to create realistic lighting, shading, shadows, and reflections for digital objects. ARCore derives this estimate primarily from the camera image itself rather than from a dedicated ranging sensor. If it’s done well, this makes a real difference in how believable it is that a digital object is sitting in the real world.
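Developers read those lighting values out of every frame. A rough sketch, assuming the session was configured for Environmental HDR lighting as in the earlier example:

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.LightEstimate

// Sketch: pull the current light estimate and hand it to a renderer.
fun applyLighting(frame: Frame) {
    val estimate = frame.lightEstimate
    if (estimate.state == LightEstimate.State.VALID) {
        val direction = estimate.environmentalHdrMainLightDirection  // drives shadow direction
        val intensity = estimate.environmentalHdrMainLightIntensity  // linear RGB brightness
        // Pass these into your shaders so digital objects are lit and
        // shadowed consistently with the real scene.
    }
}
```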


If you’ve ever played or seen Pokémon GO, this can be as simple as creating shadows for the Pokémon, making it look like they’re standing on the ground or floating in the air. Without those shadows, it would be harder to tell where they actually are. They would look like they were pasted onto a picture rather than placed in a 3D environment.

Environmental Understanding: Aside from lighting, there are three primary things ARCore needs to understand about the surrounding environment. One of the most important is plane detection. For an object to look like it’s in the real world, especially if it sits on top of something, it must appear to rest on that surface convincingly. Plane detection looks for clusters of feature points that form horizontal or vertical planes, like floors, tables, or walls.
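In the SDK, those detected surfaces show up as Plane trackables. A minimal sketch of listing the ones ARCore is currently confident about:

```kotlin
import com.google.ar.core.Plane
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Sketch: every Plane has a type (floor-like, ceiling-like, or wall-like),
// a center pose, and an approximate extent for placing virtual objects.
fun trackedPlanes(session: Session): List<Plane> =
    session.getAllTrackables(Plane::class.java)
        .filter { it.trackingState == TrackingState.TRACKING }
```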



Point clouds are also generated: each detected feature around you becomes a 3D point, building up a dataset that represents the geometry of the scene. It’s like the markers you may have seen on a person or animal during motion capture for movies, except there are thousands of them placed on everything around you. Once a plane or specific point is mapped, ARCore developers can place anchors, which fix a virtual object to that spot. That way, it stays where you expect it to be, even if you move around a lot.
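Here’s a rough sketch of both ideas in Kotlin: peeking at the point cloud, then dropping an anchor where the user taps. The tap coordinates and function name are hypothetical stand-ins for a real touch handler:

```kotlin
import android.util.Log
import com.google.ar.core.Anchor
import com.google.ar.core.Frame

// Sketch: each point-cloud entry is four floats (x, y, z, confidence).
// A hit test finds the plane or point under a screen tap, and an anchor
// pins a virtual object to that spot.
fun placeObject(frame: Frame, tapX: Float, tapY: Float): Anchor? {
    frame.acquirePointCloud().use { cloud ->
        Log.d("AR", "tracking ${cloud.points.remaining() / 4} points")
    }
    val hit = frame.hitTest(tapX, tapY).firstOrNull() ?: return null
    return hit.createAnchor()  // stays fixed even as you move around
}
```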

Much of this information is collected through depth mapping: ARCore estimates the distance between your phone’s camera and different points in the environment, using the camera alone or dedicated depth hardware (such as a time-of-flight sensor) on phones that have it. In other words, it builds a map of the distance between your phone and the objects around it. This creates a kind of 3D visual memory of the environment and tracks changes captured by your camera and sensors in real time.
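That distance map is exposed through ARCore’s Depth API. A hedged sketch, assuming depth was enabled in the session configuration shown earlier:

```kotlin
import android.media.Image
import com.google.ar.core.Frame
import com.google.ar.core.exceptions.NotYetAvailableException

// Sketch: each pixel of the DEPTH16 image is the distance, in
// millimeters, from the camera to whatever is at that pixel.
fun latestDepth(frame: Frame): Image? =
    try {
        frame.acquireDepthImage16Bits()  // caller must close() the image
    } catch (e: NotYetAvailableException) {
        null  // depth takes a few frames to become available
    }
```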



Augmented reality still has a lot of room for growth

The newest feature in ARCore is its Geospatial API, which leverages the information Google Maps and Street View collect and lets developers use it to improve their apps. This is a great feature for geocaching, geo-based games like Pokémon GO, and information anchored to specific places in the world. For example, you could point your phone’s camera at a restaurant and see a virtual menu in front of you. Or, while walking through a botanical garden, you could point your phone at a plant to see a tag with the plant’s name and other information about it.
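A rough sketch of what that looks like in code; the coordinates below are placeholder values, and it assumes geospatial mode and location permissions are already set up:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Sketch: pin content to a real-world latitude/longitude. Altitude is in
// meters, and the (0, 0, 0, 1) quaternion leaves the anchor unrotated.
fun anchorAtPlace(session: Session): Anchor? {
    val earth = session.earth ?: return null
    if (earth.trackingState != TrackingState.TRACKING) return null
    return earth.createAnchor(37.4220, -122.0841, 10.0, 0f, 0f, 0f, 1f)
}
```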

With increasingly accurate sensors, better cameras, and more computing power, augmented reality is on track to become more than placing objects that look digital in an environment. ARCore has helped promote growth in augmented reality development and will continue to help developers explore different use cases. Before long, when you’re online shopping and use AR to see a product in your room or try on some eyeglasses, you’ll see more accurate dimensions and seamless integration into the real-world environment.