Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

RealityKit Trace Metric Max/Range for visionOS app
Hi Nathaniel, I spoke with you yesterday in the WWDC lab. Thanks for chatting with me! Is it possible to get a link to a doc that lists the key metrics I'd find in a RealityKit trace, so I know whether a given metric is exceeding its limits and probably causing a problem? Right now I just see numbers and have no idea whether a metric is high or low :). This is specifically for a visionOS app. Thanks, Bob
3
0
154
Jun ’25
ReplayKit start and stop capture breaks and gives me an error when switching from Immersive to Mixed and back.
Hi, I'm developing a virtual camera system using ReplayKit to capture scene video by directly accessing raw video buffers. The capture mechanism works flawlessly when repeatedly starting and stopping video capture within a continuous immersive environment. However, a critical issue arises when interrupting the immersive space: Step 1: Enter the immersive environment and start and stop video capture (multiple times with no issues). Step 2: Press the crown button to exit the immersive environment. Step 3: Return to the immersive space. Step 4: Attempt to start the video capture. At this point, the startCapture method throws an unexpected error, disrupting the video capture workflow. This is the Xcode error that I see: "[ERROR] -[RPScreenRecorder startCaptureWithHandler:completionHandler:]_block_invoke_2:500 failed to start due to error: Error Domain=com.apple.ReplayKit.RPRecordingErrorDomain Code=-5803 "Recording failed to start" UserInfo={NSLocalizedDescription=Recording failed to start}" I have tried all possible ways to stop capture, including onDisappear and other methods, and nothing seems to solve this. (A sketch of one possible start/stop wrapper follows this entry.)
3
0
340
Mar ’25
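Regarding the ReplayKit question above: a minimal sketch of one thing to try, assuming an RPScreenRecorder-based pipeline: wrap start/stop so capture is always torn down before the immersive space is dismissed and only restarted after the space is active again. The CaptureController type and its call sites are hypothetical, not a confirmed fix for the -5803 error.

```swift
import SwiftUI
import ReplayKit

// Hypothetical wrapper around RPScreenRecorder. The idea: always stop capture
// before the immersive space goes away, and only start again after the space
// is active, so the recorder is never left in a half-torn-down state.
final class CaptureController: ObservableObject {
    private let recorder = RPScreenRecorder.shared()
    @Published private(set) var isCapturing = false

    func start() {
        guard recorder.isAvailable, !isCapturing else { return }
        recorder.startCapture { sampleBuffer, bufferType, error in
            if let error { print("Capture error: \(error)"); return }
            guard bufferType == .video else { return }
            // Process the raw video buffer here.
        } completionHandler: { [weak self] error in
            DispatchQueue.main.async { self?.isCapturing = (error == nil) }
        }
    }

    func stop(completion: (() -> Void)? = nil) {
        guard isCapturing else { completion?(); return }
        recorder.stopCapture { [weak self] _ in
            DispatchQueue.main.async {
                self?.isCapturing = false
                completion?()
            }
        }
    }
}
```

One would then call stop() from the immersive space's onDisappear (or when scenePhase leaves .active) and defer start() until the space reports onAppear again.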
The folding and unfolding effect of the NBA sand table
I saw this impressive sand table demo; the unfolding and folding effect is similar to spreading out a deck of cards, which is very interesting, but I don't know how to achieve it. I'd like to know whether there are ways to achieve this effect, and I'd appreciate some ideas. Can this effect be achieved with the existing APIs? (A rough sketch of one approach follows this entry.)
1
0
86
May ’25
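For the card-spread question above, a rough sketch of one way to approximate the effect with existing RealityKit APIs: animate each tile of the table from a stacked transform to a laid-out transform using entity.move(to:relativeTo:duration:timingFunction:). The tile grid, spacing, and timing values are all invented for illustration.

```swift
import Foundation
import RealityKit

// Sketch: fan a stack of tile entities out into a grid, staggering the start
// of each move so the tiles appear to spread like cards. All layout numbers
// here are placeholders.
func unfold(tiles: [Entity], spacing: Float = 0.15) {
    for (index, tile) in tiles.enumerated() {
        var target = tile.transform
        target.translation = [Float(index % 4) * spacing, 0, Float(index / 4) * spacing]

        DispatchQueue.main.asyncAfter(deadline: .now() + Double(index) * 0.05) {
            tile.move(to: target, relativeTo: tile.parent,
                      duration: 0.4, timingFunction: .easeInOut)
        }
    }
}
```

Folding back up would be the reverse: move every tile back to the stacked transform, in reverse order.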
How to use defaultSize with visionOS window restoration?
One of the most common ways to provide a window size in visionOS is the defaultSize scene modifier: WindowGroup(id: "someID") { SomeView() } .defaultSize(CGSize(width: 600, height: 600)). Starting in visionOS 26, using this has a side effect. visionOS 26 will restore windows that have been locked in place or snapped to surfaces. If a user has manually adjusted the size of a locked/snapped window, the user's size is only restored in some cases. Manual resize respected: leaving a room and returning later; taking the headset off and putting it back on later. Manual resize NOT respected: device restart. In that case the window is reopened where it was locked, but the size is set back to the values passed to defaultSize, and the manual resizing adjustments the user made are lost. This is counter to how all other windows and widgets work. I reported this last month (FB18429638) but haven't heard back whether it is a bug or intended behavior. Questions: What is the best way to provide a default window size that will only be used when opening new windows, and not during scene restoration? Should we try to keep track of window sizes after users adjust them and save that somewhere (a sketch of that idea follows this entry)? If this is intended behavior, can someone please update the docs accordingly?
1
0
457
Jul ’25
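On the "track the size ourselves" question above, a sketch of one possible workaround, not confirmed to survive the device-restart restoration path: observe the content size with a GeometryReader, persist the last value the user chose, and feed it back into defaultSize for new windows. The @AppStorage keys and the SomeView placeholder are assumptions.

```swift
import SwiftUI

struct SomeView: View {
    var body: some View { Text("Content") }   // stand-in for the real content
}

struct SomeScene: Scene {
    // Persist the user's last manual resize (keys invented for illustration).
    @AppStorage("lastWindowWidth") private var lastWidth: Double = 600
    @AppStorage("lastWindowHeight") private var lastHeight: Double = 600

    var body: some Scene {
        WindowGroup(id: "someID") {
            SomeView()
                .background {
                    GeometryReader { proxy in
                        Color.clear
                            .onChange(of: proxy.size) { _, newSize in
                                lastWidth = newSize.width
                                lastHeight = newSize.height
                            }
                    }
                }
        }
        // New windows open at the last recorded size, falling back to 600x600.
        .defaultSize(CGSize(width: lastWidth, height: lastHeight))
    }
}
```

Whether defaultSize re-reads these values during visionOS 26 scene restoration is exactly the open question in the post, so treat this as an experiment rather than an answer.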
How To Move Camera with PS5 Game Controller
Hello again! So, I am trying my best to figure out how to get a simple, fully immersive scene that is navigable via the PS5 controller to work on Vision Pro. I am even using ChatGPT (both externally AND in Xcode when it makes fix recommendations); however, I seem to be moving around in circles, having to constantly send error messages back to ChatGPT, then try the new "confirmed to work" fixed code, only to get more errors. I have also explicitly let the AI know that I'm using all the current betas, only to learn that lots of stuff has been deprecated (of course!). This is my initial query to ChatGPT: "How can I create a fully immersive VR experience for Apple Vision Pro, using a PS5 game controller to navigate the user via first person view through a 3D scene?" And then down the rabbit hole for the past few days. Seriously, I thought AI was supposed to be helpful! LOL Per the Xcode logs, I'm seeing thumbstick movement, but I simply cannot get the camera to move. Is this even possible?! Is it a camera defined in Reality Composer Pro, or does it need to be defined outside of it? I'm trying to follow along with the AI, but it seems to go back and forth with different solutions. Here is the latest it's spitting out: __ Thanks for this super clear log — and you're absolutely right: the gamepad input is working, CameraRoot is moving as expected… but your actual camera view is not changing. This strongly suggests the issue is that RealityKit is not using your custom camera (PerspectiveCameraComponent) for rendering the view. ⸻ 🔍 What's happening Even though CameraRoot is moving (with correct logs), RealityKit will default to its own internal camera unless: 1. You explicitly assign your camera entity to be the one used, or 2. Your camera is the only one and RealityKit picks it up automatically. But in visionOS 2.0 / Xcode 26, there seems to be a bug or behavior change where RealityKit no longer auto-uses your custom PerspectiveCameraComponent in RealityView unless it's: • properly set with content.camera, or • part of a .usda/Reality Composer camera hierarchy that's already active. Unfortunately: 🟥 RealityView.camera = .entity(cameraRoot) ❌ Is no longer allowed (we confirmed this breaks with a type error) ⸻ ✅ The Reliable Workaround: Attach your camera to the Immersive scene itself Since we're loading a .usda scene (Reality Composer Pro), here's the most reliable path forward: ✔️ Option A: Embed your camera into the Reality Composer Pro scene 1. Open your Reality Composer Pro scene 2. Add a Camera entity directly in your Immersive.usda scene 3. Name it something like "PlayerCamera" 4. Position it at [0, 1.6, 0] 5. Leave it selected as the default camera Then in your Swift code: • Find the camera entity with .findEntity(named:) • Move that entity instead of creating a new one. ⸻ ✔️ Option B: Force RealityKit to use your custom camera (via transform parenting) Since direct camera assignment is unavailable, we can use a trick: Move the whole RealityView scene via a parent entity with a camera. But let's go with Option A, since you've already got a Reality Composer Pro scene. __ Can any developer over there make a suggestion on how to move forward? I JUST want to create a sandbox template so I can start exploring 3D scenes created in Maya and sent over to the headset. I'll deal with animation next, but come on, why is this so difficult to get working? I'm not a programmer, but I have been trying to wrap my head around Xcode and SwiftUI. This needs to be much simpler.
Or, you need to provide us creatives with better sample templates and non-programmer-speak instructions on how to set this up properly. Ideally, you'd HIRE us 3D professionals to work side by side with the programmers to help make these tools usable, especially Reality Composer Pro. Seriously, I am making a concerted effort to use the native tools, even though I would love to be porting Unreal Engine scenes over. If anyone can help point me in the right direction, coming from a 3D Creator/Animator/Modeler perspective, I, and my fellow peers in the XR/AR/VR community, would greatly appreciate it. Thank you. (A sketch of one controller-driven locomotion approach follows this entry.)
8
0
800
Jul ’25
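For the controller question above, a sketch under a couple of assumptions: in a visionOS immersive space the rendered view is always the wearer's head pose, so the camera itself cannot be repositioned; one common pattern instead is to move the scene's root entity opposite to the desired movement. The ControllerLocomotion type, the worldRoot entity, and the speed constant are all invented for illustration; this is not an official template.

```swift
import Foundation
import RealityKit
import GameController

// Moves the whole scene root opposite to the left thumbstick to simulate the
// wearer walking through the scene. Names and constants are placeholders.
final class ControllerLocomotion {
    let worldRoot: Entity
    private var stick = SIMD2<Float>(0, 0)
    private var connectObserver: NSObjectProtocol?
    private let speed: Float = 1.5   // metres per second, arbitrary

    init(worldRoot: Entity) {
        self.worldRoot = worldRoot
        connectObserver = NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { [weak self] note in
            guard let pad = (note.object as? GCController)?.extendedGamepad else { return }
            pad.leftThumbstick.valueChangedHandler = { _, x, y in
                self?.stick = SIMD2(x, y)   // cache the latest stick values
            }
        }
    }

    /// Call once per frame, e.g. from a custom RealityKit System's update(context:).
    func update(deltaTime: Float) {
        // Moving the world opposite to the stick makes the wearer appear to move forward.
        let delta = SIMD3<Float>(stick.x, 0, -stick.y) * (speed * deltaTime)
        worldRoot.position -= delta
    }
}
```

Checking GCController.controllers() at launch would additionally handle a controller that was already connected before the notification fires.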
Gaze, Eye Tracking responsive Stuff
I am wondering, is it possible to configure a 3D object to respond to the gaze of a person, e.g. to change the colors of some parts of the 3D model where the person is looking, i.e. where their gaze lands on the surface of the 3D model? For example, imagine a 3D model of a cool dragon 🐉 in the physical space of a person, seen through the mixed-reality view of a Vision Pro. It would be really cool to change the color, or make some specific parts of the dragon's skin shimmer, but only in the areas where the person is looking. Is this possible? Is it doable with the eye tracking of the Vision Pro? Any advice would be appreciated. 🤝 🤓 I am new to iOS and visionOS development. (A sketch of what the system hover effect allows follows this entry.)
1
0
243
Mar ’25
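On the gaze question above: for privacy reasons visionOS does not hand apps raw gaze coordinates, but the system can highlight the entity the wearer is looking at via HoverEffectComponent. Below is a sketch of splitting a model into separately targetable parts so each one lights up under gaze; the per-part setup is an assumption about the model's hierarchy.

```swift
import RealityKit

// Give each part of the dragon its own collision shape, input target, and
// HoverEffectComponent so the system highlights whichever part is under the
// wearer's gaze. The app never receives the gaze point itself.
func makeGazeResponsive(_ part: ModelEntity) {
    part.generateCollisionShapes(recursive: false)   // needed for hit testing
    part.components.set(InputTargetComponent())       // makes it gaze/pinch targetable
    part.components.set(HoverEffectComponent())       // system highlight on gaze
}
```

Finer-grained effects (e.g. a shimmer exactly at the gaze point) would, as far as I know, go through the shader-based hover effects and the hover state input in Shader Graph materials on visionOS 2, since raw gaze data still never reaches app code.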
visionOS Simulator Rotate and Scale gestures difficult to register (capture)
We were having an issue where the system rotate and scale gestures (two-handed gestures / RotateGesture3D and MagnifyGesture) were extremely difficult to register (make work) in the visionOS simulator. The solution we found was to: Launch your app in the simulator. Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures. Press and hold the Option key to display touch points (i.e. the two-handed gesture points). While keeping the Option key pressed, release the pointer and re-enable it again. I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-enable it again" translates simply to removing the three fingers and placing them on the trackpad again. If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object. Context if you are interested: our issue was also occurring in Apple's own sample project relating to gestures, "Transforming RealityKit entities using gestures", at the link below. Apple's article "Interacting with your app in the visionOS simulator", also linked below, states for two-handed gestures: "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points." This simply did not work anymore for rotation and scaling gestures. These gestures used to be a lot more responsive in Sonoma. Either the article should be updated to what I described above, or there is an issue. Our colleague who is using macOS Sonoma 14.6.1 with the latest release of Xcode is not having these issues. Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator: macOS Sequoia 16.1 Beta, Xcode 16.1 RC w/ visionOS 2.1; macOS Sequoia 16.1 Beta, Xcode 16.1 RC w/ visionOS 2.0; macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 w/ visionOS 2.1; macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 w/ visionOS 2.0; macOS Sequoia 16.1 Beta, removed all Xcodes and installed the build from the App Store (Xcode 16.1); macOS Sequoia 16.1 Beta, Xcode 16.0 w/ visionOS 2.0; completely wiped and reset the entire development machine, re-installed the latest releases of Sequoia (15.1) and Xcode (15.1). Throughout this troubleshooting I often: restarted both Xcode and the sim, erased all derived data, erased all contents and settings from the sims, performed fresh git clones. None of the above worked; only the workaround described above works atm. As you can maybe deduce, it was very time-consuming to find the workaround, and we also wasted some development effort thinking our gesture code was no good. Hopefully this will help other devs. (For reference, a minimal gesture setup of the kind we were testing is sketched after this entry.) Article Link: https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator Gesture sample project link: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
3
0
1.1k
Oct ’25
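For reference, a minimal sketch of the kind of two-handed gesture setup the workaround above is exercising. The box entity is a stand-in for the real model, and a production version would compose gesture deltas with the entity's starting transform instead of applying absolute values as done here.

```swift
import SwiftUI
import RealityKit
import Spatial

// Minimal scene with one targetable entity and the two system gestures attached.
struct GestureTestView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.2), materials: [SimpleMaterial()])
            box.generateCollisionShapes(recursive: false)   // required for hit testing
            box.components.set(InputTargetComponent())       // required for gestures
            content.add(box)
        }
        .gesture(
            MagnifyGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Scale the targeted entity by the pinch magnification.
                    value.entity.scale = .one * Float(value.magnification)
                }
        )
        .gesture(
            RotateGesture3D()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Apply the two-handed rotation to the targeted entity.
                    let q = value.rotation.quaternion
                    value.entity.orientation = simd_quatf(ix: Float(q.imag.x),
                                                          iy: Float(q.imag.y),
                                                          iz: Float(q.imag.z),
                                                          r: Float(q.real))
                }
        )
    }
}
```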
Alternatives to SceneView
Hey there, since SceneView has been marked as "deprecated" for SwiftUI, I'm wondering which alternatives should be considered for the following situation: I have a SwiftUI app (for iOS and iPadOS) where users can view 3D models (USDZ) in a scene, with rotate, scale, and move gestures. The models are downloaded from a web backend and referenced via local URL paths. What I tested: I've tried ARView in .nonAR mode and RealityView; however, I didn't get the expected result of letting the user rotate and scale the 3D models in a virtual space. ARView in nonAR mode still shows the object like in normal AR mode, without a camera stream. I tried to add gestures to the RealityView on iOS: loading USDZ 3D models worked, but the gestures didn't. Model3D is only available for visionOS (it would be amazing to have it for iOS). I also checked QuickLook Preview; however, it works pretty strangely via a file picker etc., which is not the way the user should load the 3D models in my app. Maybe I missed something, but I couldn't find anything that helps. I'm pretty much stuck adopting the latest and greatest frameworks/APIs in my app and taking the next steps to port my app to visionOS. Long story short 😃: Does someone have an idea what the alternative to SceneView is for USDZ 3D models? I appreciate your support!! Thanks in advance! (A RealityView sketch follows this entry.)
4
0
253
Jul ’25
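For the situation above, a sketch assuming an iOS 18+ / iPadOS 18+ deployment target, where RealityView is available outside visionOS: load a downloaded USDZ from a local file URL, render it with a virtual (non-AR) camera, and rotate it with a drag gesture. The URL handling and the drag-to-rotation mapping are placeholders to adapt.

```swift
import SwiftUI
import RealityKit

// Shows one USDZ file and spins it around the y axis as the user drags.
struct ModelViewer: View {
    let modelURL: URL                      // local path to the downloaded .usdz
    @State private var root = Entity()
    @State private var angle: Angle = .zero

    var body: some View {
        RealityView { content in
            // Render into a virtual (non-AR) camera instead of the device camera feed.
            content.camera = .virtual
            if let model = try? Entity.load(contentsOf: modelURL) {
                root.addChild(model)
                content.add(root)
            }
        } update: { _ in
            // Re-applied whenever `angle` changes.
            root.orientation = simd_quatf(angle: Float(angle.radians), axis: [0, 1, 0])
        }
        .gesture(
            DragGesture().onChanged { value in
                angle = .degrees(value.translation.width)
            }
        )
    }
}
```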
Digital Crown press when both immersive space and additional windows are presented
I have been experimenting with the Hello World sample app from https://developer.apple.com/documentation/visionos/world and I came across behavior that appears inconsistent with the user-facing documentation describing the device controls at https://support.apple.com/en-gb/guide/apple-vision-pro/tan1e2a29e00/visionos I tried pressing the simulator's "Home" button while the "Objects in Orbit" immersive space was presented alongside the main application window. According to the user documentation, pressing the Digital Crown should take the user directly to the Home View. In my test, a single press only dismissed the immersive space; I needed another press to "exit" the app and go to the Home View. Is this behavior expected? I am assuming that the "Home" button in the simulator behaves as if the user pressed the Digital Crown on the device; I don't have access to the actual hardware.
5
0
430
Apr ’25
How to play blend shape (morph) animations exported from Blender in Vision Pro apps using RealityKit
I am exporting a .usdc file from Blender that already has some morph animations. The animations play well in Blender, but after export I cannot seem to play them in RealityKit or Reality Composer Pro. Entity.availableAnimations is an empty array, and none of the child objects in the entity hierarchy has an AnimationLibraryComponent. Maybe I am exporting it wrong; I tried multiple combinations of settings, but none seem to work. Here are my export settings in Blender. The original file I purchased is an FBX file that has the animation, but when I try to convert it directly in Reality Converter, it doesn't seem to play animations either. (A sketch of how the animations would be played once present follows this entry.)
2
0
204
Jun ’25
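For reference on the question above, a sketch of how the animations would be inspected and played once the USD export actually carries them; the traversal is generic because exporters often attach animations to a child entity rather than the root.

```swift
import RealityKit

// Walks the hierarchy and plays every animation it finds on any descendant.
func playMorphAnimations(on entity: Entity) {
    var toVisit: [Entity] = [entity]
    while let current = toVisit.popLast() {
        for animation in current.availableAnimations {
            // Cross-fade into the clip over 0.25 s and start it immediately.
            current.playAnimation(animation, transitionDuration: 0.25, startsPaused: false)
        }
        toVisit.append(contentsOf: current.children)
    }
}
```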
RealityKit Entity ComponentSet does not conform to Sequence?
Hello, I'm trying to view the components of an Entity I'm creating in RealityKit by reading from a USDZ file. I have the following code snippet in my app: if let appleEntity = try? Entity.loadModel(named: "apple_tile") { let c = appleEntity.components for comp in c { // <- compiler error here print(comp) } } The compiler error I'm receiving says "For-in loop requires 'Entity.ComponentSet' to conform to 'Sequence'". However, I thought this was the case, according to the documentation for Entity.ComponentSet? Curious if anyone else has had this problem. Running Xcode 15.4, and my Swift version is xcrun swift -version swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4) Target: x86_64-apple-macosx14.0 (A sketch of a typed-subscript alternative follows this entry.)
3
0
422
Mar ’25
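For the question above, a sketch of an alternative that compiles on SDKs where ComponentSet cannot be iterated: ask for the specific component types you care about via the typed subscript instead of looping over the set.

```swift
import RealityKit

// Inspect a couple of well-known component types directly by type.
func inspect(_ entity: Entity) {
    if let model = entity.components[ModelComponent.self] {
        print("Mesh bounds:", model.mesh.bounds)
    }
    if let collision = entity.components[CollisionComponent.self] {
        print("Collision shapes:", collision.shapes.count)
    }
}
```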
Background Assets in visionOS
Hi, I'm working on a visionOS app and would like to integrate Background Assets to download large files after the app is installed. I'm wondering what would happen if the user takes off the headset while a background asset is being downloaded: would it continue downloading, or would the download be stopped or paused? I was looking for a way to download large assets while the user is not wearing the Vision Pro; is there any other alternative? Thanks in advance.
1
0
160
Jun ’25
Missing Properties in BillboardComponent
In an earlier beta, BillboardComponent had rotationAxis and upDirection properties, which allowed more fine-grained control of how an entity rotates towards the camera. Currently, it is only possible to orient the z axis of the entity. Looking at the robot in the documentation, the rotation of its z axis causes its feet to lift off the ground. Before, it was possible to constrain the rotation to one axis (y, for example) so that the robot's feet stayed on the ground, using billboard.upDirection = [0, 1, 0] and billboard.rotationAxis = [0, 1, 0]. Is there an alternative way to achieve this? Are these properties (or similar) coming back? (A custom-System sketch follows this entry.)
1
0
310
Mar ’25
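For the question above, a sketch of a yaw-only billboard implemented as a custom System, approximating the old upDirection/rotationAxis behavior. It assumes an ARKitSession with a running WorldTrackingProvider is shared via the static property; all names are invented for illustration.

```swift
import Foundation
import QuartzCore
import RealityKit
import ARKit

// Marker component for entities that should yaw toward the wearer only.
struct BillboardYOnlyComponent: Component {}

struct BillboardYOnlySystem: System {
    static var worldTracking: WorldTrackingProvider?
    private static let query = EntityQuery(where: .has(BillboardYOnlyComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        guard let device = Self.worldTracking?
            .queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
        let head = device.originFromAnchorTransform.columns.3
        let headPosition = SIMD3<Float>(head.x, head.y, head.z)

        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            // Project the direction to the head onto the horizontal plane so the
            // entity only yaws and its feet stay on the ground.
            var toHead = headPosition - entity.position(relativeTo: nil)
            toHead.y = 0
            guard length(toHead) > 0.001 else { continue }
            // Orient the entity's +Z axis toward the wearer, rotating around Y only.
            let yaw = atan2(toHead.x, toHead.z)
            entity.setOrientation(simd_quatf(angle: yaw, axis: [0, 1, 0]), relativeTo: nil)
        }
    }
}
```

BillboardYOnlyComponent.registerComponent() and BillboardYOnlySystem.registerSystem() would need to be called at launch, and the marker component added to the robot's root entity.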
Creating spatial video with one camera
Hello everyone, I would like to create my own spatial video on my Apple Vision Pro. According to all the documentation from Apple, this requires two camera angles that enhance the spatial perception. I have purchased the Enterprise license with main camera access for this purpose. However, this only gives me access to the left main camera of the headset. Is there a way to access the right camera as well? Or is the one camera image enough to create a spatial video, by splitting the image for example? I am open to any help and ideas. My goal is to create the video with the cameras on the headset, not externally.
1
0
298
Jun ’25
RealityKit entity.write(to:) generates fatal protection error
My app for framing and arranging pictures from Photos on visionOS allows users to write the arrangements they create to .reality files using RealityKit entity.write(to:) that they then display to customers on their websites. This works perfectly on visionOS 2, but fails with a fatal protection error on visionOS 26 beta 1 and beta 2 when write(to:) attempts to write to its internal cache: 2025-06-29 14:03:04.688 Failed to write reality file Error Domain=RERealityFileWriterErrorDomain Code=10 "Could not create parent folders for file path /var/mobile/Containers/Data/Application/81E1DDC4-331F-425D-919B-3AB87390479A/Library/Caches/com.GeorgePurvis.Photography.FrameItVision/RealityFileBundleZippingTmp_A049685F-C9B2-479B-890D-CF43D13B60E9/41453BC9-26CB-46C5-ADBE-C0A50253EC27." UserInfo={NSLocalizedDescription=Could not create parent folders for file path /var/mobile/Containers/Data/Application/81E1DDC4-331F-425D-919B-3AB87390479A/Library/Caches/com.GeorgePurvis.Photography.FrameItVision/RealityFileBundleZippingTmp_A049685F-C9B2-479B-890D-CF43D13B60E9/41453BC9-26CB-46C5-ADBE-C0A50253EC27.} Has anyone else encountered this problem? Do you have a workaround? Have you filed a feedback? ChatGPT analysis of the error and my code reports: Why there is no workaround • entity.write(to:) is a black box — you cannot override where it builds its staging bundle • it always tries to create those random folders itself • you cannot supply a parent or working directory to RealityFileWriter • so if the system fails to create that folder, you cannot patch it 👉 This is why you see a fatal error with no recovery. See also feedbacks: FB18494954, FB18036627, FB18063766
10
0
545
Jul ’25
Imitating a grip on an object
I'm playing about with the hand-tracking systems in RealityKit / Vision Pro. I thought it would be interesting if I could attach a virtual object to a hand when the hand is gripping (I thought it would be fun to attach a basic cylinder to mimic a wand from Harry Potter). I'm able to detect when the user is gripping, but I'm having trouble placing an object as though it's within the hand. The simplest version of this uses an AnchorEntity pointing to the user's palm, which kind of works but quickly breaks the illusion when you rotate the wrist or hand. It seems as though I will have to roll my own anchor entity using the various joints of the user's hand, and I thought calculating a median point between the thumb and little finger tips would be a good start, but it has proven a little difficult, as we need both rotation and position. I'm already out of my depth with RealityKit and matrices, and (thanks to ChatGPT) I have some code, but as soon as I apply the position manually (as opposed to using a hand anchor entity) it fails to render on the user's hand. It feels like this should be something someone has already looked into. Any ideas on what might be the issue here? (A sketch of converting the joints into world space follows this entry.) Note: HandTrackingSystem.handTracking is a HandTrackingProvider() guard let anchors = HandTrackingSystem.handTracking.latestAnchors.leftHand else { return } if let thumb = anchors.handSkeleton?.joint(.thumbTip), let little = anchors.handSkeleton?.joint(.littleFingerTip) { let thumbPos = simd_make_float3(thumb.anchorFromJointTransform.columns.3) let littlePos = simd_make_float3(little.anchorFromJointTransform.columns.3) let midPos = (thumbPos + littlePos) / 2 let direction = normalize(littlePos - thumbPos) let rotation = simd_quatf(from: [0, 1, 0], to: direction) wandEntity.transform.translation = midPos wandEntity.transform.rotation = rotation content.add(wandEntity) }
1
0
567
Jul ’25
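A likely cause for the question above, sketched below but not verified on device: anchorFromJointTransform is expressed in the hand anchor's local space, so the joint positions need to be composed with the anchor's originFromAnchorTransform before being applied to an entity that lives in world space. The updateWand function name is invented.

```swift
import RealityKit
import ARKit

// Convert the thumb and little-finger joints into world (origin) space, then
// place the wand at their midpoint, aligned along the direction between them.
func updateWand(_ wandEntity: Entity, from hand: HandAnchor) {
    guard let skeleton = hand.handSkeleton else { return }
    let originFromAnchor = hand.originFromAnchorTransform

    let thumbWorld = originFromAnchor * skeleton.joint(.thumbTip).anchorFromJointTransform
    let littleWorld = originFromAnchor * skeleton.joint(.littleFingerTip).anchorFromJointTransform

    let thumbPos = simd_make_float3(thumbWorld.columns.3)
    let littlePos = simd_make_float3(littleWorld.columns.3)

    let midPos = (thumbPos + littlePos) / 2
    let direction = normalize(littlePos - thumbPos)
    let rotation = simd_quatf(from: [0, 1, 0], to: direction)

    wandEntity.setPosition(midPos, relativeTo: nil)
    wandEntity.setOrientation(rotation, relativeTo: nil)
}
```

Adding wandEntity to the RealityView content once, outside the per-frame update, also avoids re-adding it every frame as in the snippet in the post.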