Limitations of RealityKit 1.0
Earlier this year, client after client asked: “What is RealityKit? Should we be building our app using that?”
The answer to the first question is simple: RealityKit is Apple’s new framework, announced in June 2019, for building augmented reality applications. The framework helps developers handle AR rendering, animation, physics, and audio.
The second question is monumentally more difficult to answer. Every situation is unique, so I would dig into what their product needed and see if RealityKit was a good fit. I’d learn more about the ins and outs of the features I hadn’t previously explored. I’d scour GitHub repos, article examples, and RealityKit projects posted online. I’d review RealityKit projects I’d open-sourced myself.
But as of June 19, 2020, each time I was asked whether RealityKit should be used in production, my answer was: Nah.
When I wrote about Building with RealityKit for the First Time, I wavered on how to interpret its current limitations. Are the framework’s rough edges indicative of Apple placing a forceful restraint on the design of future AR apps? Or are the feature limitations and copious constraints merely symptoms of dealing with an early 1.0 release?
I’m certainly hoping for the latter. I much prefer an outlook that brings new features and capabilities to RealityKit at a steady cadence. Preferably, some of the limitations listed below will be addressed by the release of RealityKit 2 (if they’re going to follow ARKit’s release nomenclature). We’ll have a much better idea of where the framework is headed in mere days, since WWDC is next week.
In the meantime, let’s dive into the various showstopping issues that left me running away from RealityKit and back to SceneKit (Apple’s older 3D framework) time and again.
No Concave Models
One client wanted to build a multiplayer AR basketball mini-game. I haven’t seen a compelling turn-based version of basketball, so we’re talking real-time physics and multiple players shooting on the same hoop at the same time.
I got started. I created a barebones RealityKit project and jumped into Reality Composer to throw up a hoop in my living room. I cobbled together a sad-looking hoop placeholder model just fine, but when I threw an augmented reality ball at it – the ball bounced off an invisible force field surrounding the hoop before getting anywhere near it. After I double-checked my physics code, collision bitmasking, and projectile (i.e. ball) geometry, I turned my attention to the hoop. It’s a torus all right, but testing demonstrated an extraordinarily crude bounding box was being used for physics calculations.
Picture a basketball hoop and backboard. Now picture you put the entire thing in a big ol’ cardboard box. Now picture you throw that box around a bit and give it a few kicks. You now have some weirdly shaped, dented, quasi-rectangular prism – and you also happen to have the shape RealityKit was using for physics calculations involving my basketball hoop.
Picture that weird box with the hoop and backboard inside one last time. Without opening the box, how do you successfully score a basket on a hoop that’s packaged in a box? I couldn’t figure out how to defy that bit of basic physics either, so RealityKit was not used on that product. I instantly turned to SceneKit to build the AR basketball game and didn’t look back.
Later, when doing a project post-mortem, I discovered why: RealityKit generates only convex collision shapes, so a 3D model with a hole in it – such as a hoop – gets approximated by its convex hull, which seals the hole shut. This total lack of support for concave collision models was mind-blowing to me.
I started learning how to code in 2014 when Swift was announced, so I wasn’t around for the early days of iOS. I couldn’t help but wonder: Was there a time when SceneKit couldn’t handle a hoop? Perhaps, or perhaps there’s some good reason why something so fundamental isn’t possible in RealityKit 1.0. Bafflingly, at launch one of the few 3D models to play around with in Reality Composer was in fact a basketball hoop – a basketball hoop that couldn’t be used to score baskets. Either way, improved physics modeling that includes support for concave models sounds like an excellent place for RealityKit 2 to start, and a way to catch up with what I expected from the launch version a year ago.
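If you’re stuck with convex-only collisions, one workaround is to approximate concave geometry with a composite of convex shapes. Here’s a minimal sketch of that idea for a hoop’s rim – the `rimOffsets` helper and all dimensions are my own illustrative inventions, not RealityKit API:

```swift
import Foundation

// Hypothetical helper: positions for a ring of small convex collision
// boxes laid around a hoop rim of the given radius. Together the boxes
// trace the rim while leaving the center open, so a ball can pass through.
func rimOffsets(radius: Float, segments: Int) -> [SIMD3<Float>] {
    (0..<segments).map { i in
        let angle = 2 * Float.pi * Float(i) / Float(segments)
        return SIMD3<Float>(radius * Float(cos(Double(angle))),
                            0,
                            radius * Float(sin(Double(angle))))
    }
}

// On Apple platforms, each offset can become a convex box via RealityKit's
// real ShapeResource API (generateBox, offsetBy, CollisionComponent):
//
//   let shapes = rimOffsets(radius: 0.23, segments: 16).map {
//       ShapeResource.generateBox(size: [0.09, 0.02, 0.02]).offsetBy(translation: $0)
//   }
//   hoopEntity.components.set(CollisionComponent(shapes: shapes))
```

Sixteen thin boxes is crude, but because each piece is individually convex, the composite behaves roughly like the torus for a bouncing ball – the hull no longer seals the hoop shut.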
No Transparency Adjustments

Most games and AR apps have hidden secrets. There’s almost always some invisible geometry that blocks you from seeing something behind it; or that ensures the UI is always displayed in certain situations; or that acts as a hitbox. But not in RealityKit! You’ve got to figure out an even-hackier way to get the product to do what you want it to do, because there’s no fading or hiding or manipulating transparency here.
Or you can switch to SceneKit and adjust a node’s transparency in less than five seconds.
The only objects you can fade in RealityKit are primitive shapes such as cubes and spheres, and I haven’t seen many compelling AR apps that use only cubes and spheres. While you can completely hide objects, doing so prevents any scene-based interaction with the hidden geometry – eliminating the possibility of programmatically creating hitboxes. Support for adjusting model transparency is another prime candidate to sneak into RealityKit 2, since it’s relatively basic functionality.
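For contrast, here’s roughly what the two worlds look like side by side. The `fadeSteps` helper is my own illustrative stand-in for driving a fade by hand; the commented `SCNAction` call is the real SceneKit one-liner:

```swift
import Foundation

// Hypothetical helper: evenly spaced opacity values for a manual fade,
// e.g. if you had to drive transparency yourself frame by frame.
func fadeSteps(from start: Float, to end: Float, count: Int) -> [Float] {
    guard count > 1 else { return [end] }
    return (0..<count).map { start + (end - start) * Float($0) / Float(count - 1) }
}

// In SceneKit none of that is necessary – fading a node is one line:
//
//   node.opacity = 0.35                                             // instant
//   node.runAction(SCNAction.fadeOpacity(to: 0.0, duration: 0.5))   // animated
//
// RealityKit 1.0's closest tool hides the entity entirely, which also
// removes it from collision checks – no good for invisible hitboxes:
//
//   entity.isEnabled = false
```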
No Video Textures

Creating 20-foot virtual screens in augmented reality is a concept I’ve explored time after time after time. I think AR will eventually kill multiple “electronic screens” industries worth hundreds of billions of dollars.
It follows that I’d try to add some sweet floating video screens to a RealityKit project. Except, you can’t (without some very hacky and fragile code).
Time to once again cue up some SceneKit mixed with SpriteKit until RealityKit adds this capability.
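The SceneKit + SpriteKit route I fall back on looks roughly like this. The `planeHeight` helper is my own invention, just to size the virtual screen to the video’s aspect ratio; the commented calls are the real AVFoundation/SpriteKit/SceneKit APIs:

```swift
import Foundation

// Hypothetical helper: height for an AR "screen" plane that preserves
// the video's aspect ratio at a chosen width.
func planeHeight(videoWidth: Float, videoHeight: Float, planeWidth: Float) -> Float {
    planeWidth * videoHeight / videoWidth
}

// On Apple platforms, an SKVideoNode rendered into an SKScene can be used
// as a SceneKit material's contents – the classic pattern RealityKit 1.0 lacks:
//
//   let player = AVPlayer(url: videoURL)
//   let videoNode = SKVideoNode(avPlayer: player)
//   let skScene = SKScene(size: CGSize(width: 1920, height: 1080))
//   videoNode.position = CGPoint(x: skScene.size.width / 2, y: skScene.size.height / 2)
//   videoNode.size = skScene.size
//   skScene.addChild(videoNode)
//
//   let height = planeHeight(videoWidth: 1920, videoHeight: 1080, planeWidth: 6.0)
//   let plane = SCNPlane(width: 6.0, height: CGFloat(height))  // a 20-foot-ish screen
//   plane.firstMaterial?.diffuse.contents = skScene
//   player.play()
```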
No Shaders

One client uses AR to visualize beautiful, artisan wallpapers and tiles on your walls. What really makes the augmented reality materials shine and look ultra-realistic are little bits of code called shaders. To oversimplify, shaders influence how a surface is displayed on-screen. There’s an entire shader subculture, but suffice it to say that if you’ve seen something especially graphically impressive – there’s probably a shader behind it.
But (surprise!) there’s no support for shaders in RealityKit. So if you need shaders, go back to SceneKit. Again.
Although, having only recently experimented with shaders, this omission was not that surprising to me. The various ways you can implement shaders – across multiple shading languages – in SceneKit left me a bit perplexed. Given the mature-yet-convoluted shader situation in SceneKit, I would be (pleasantly) surprised to see official shader support in RealityKit soon. Shaders, while amazing and truly product-elevating when done right, seem like a lower priority when we don’t even have basic, functional physics right now.
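For the curious, the SceneKit side of this is shader modifiers: short Metal-style snippets injected into fixed entry points. A minimal sketch – `pulseShader` is my own illustrative helper, while `u_time`, `_surface`, and `shaderModifiers` are the real SceneKit hooks:

```swift
import Foundation

// Hypothetical helper that builds a SceneKit surface shader-modifier
// snippet pulsing the diffuse color over time at a given speed.
func pulseShader(speed: Float) -> String {
    """
    float pulse = 0.5 + 0.5 * sin(u_time * \(speed));
    _surface.diffuse.rgb *= pulse;
    """
}

// Applied to a SceneKit material (real API – nothing comparable exists
// in RealityKit 1.0):
//
//   material.shaderModifiers = [.surface: pulseShader(speed: 2.0)]
```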
And the Rest
There’s more functionality that’s been baked into SceneKit for years but is conspicuously absent from RealityKit, including: rendering order, texture tiling, and TextureResource transparency controls. This list is by no means exhaustive – it’s what I stumbled across when trying to build a handful of complex projects. There likely are hacks to get around some of these shortcomings, but there’s a world of difference between having native, built-in support and being forced to hack your way to a desired outcome.
Despite wanting to use Apple’s latest augmented reality framework for maximum future-proofing, RealityKit 1.0’s limitations persistently forced my hand to use SceneKit instead.
Reports indicate Apple’s first AR/VR device is still 2+ years away. I’m expecting this initial release of RealityKit we’ve had for the past 12 months to be quite feeble compared to future versions of the framework. Here’s hoping multiple complaints above are addressed at WWDC20 next week, and we can get down to the fun part of building complex augmented reality apps using RealityKit.
Want more? Read my last post about building with RealityKit for the first time.
Want to build some awesome AR thing? Reach out!