Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic


FromToByAnimation triggers availableAnimations, not the single-bone animation
So, I was trying to animate a single bone using FromToByAnimation, but when I start the animation, the model instead plays the full-body animation stored in availableAnimations. If I don't run testAnimation, nothing happens; if I do run it, I see the same animation as if I had called entity.playAnimation(entity.availableAnimations[0], ...). Here's the full code I use to animate a single bone:

```swift
func testAnimation() {
    guard let jawAnim = jawAnimation(mouthOpen: 0.4) else {
        print("Failed to create jawAnim")
        return
    }
    guard let creature,
          let animResource = try? AnimationResource.generate(with: jawAnim) else { return }
    let controller = creature.playAnimation(animResource, transitionDuration: 0.02, startsPaused: false)
    print("controller: \(controller)")
}

func jawAnimation(mouthOpen: Float) -> FromToByAnimation<JointTransforms>? {
    guard let basePose else { return nil }
    guard let index = basePose.jointNames.firstIndex(of: jawBoneName) else {
        print("Target joint \(self.jawBoneName) not found in default pose joint names")
        return nil
    }

    let fromTransforms = basePose.jointTransforms
    let baseJawTransform = fromTransforms[index]
    let maxAngle: Float = 40
    let angle: Float = maxAngle * mouthOpen * (.pi / 180)
    let extraRot = simd_quatf(angle: angle, axis: simd_float3(x: 0, y: 0, z: 1))

    var toTransforms = basePose.jointTransforms
    toTransforms[index] = Transform(
        scale: baseJawTransform.scale * 2,
        rotation: baseJawTransform.rotation * extraRot,
        translation: baseJawTransform.translation
    )

    let fromToBy = FromToByAnimation<JointTransforms>(
        jointNames: basePose.jointNames,
        name: "jaw-anim",
        from: fromTransforms,
        to: toTransforms,
        duration: 0.1,
        bindTarget: .jointTransforms,
        repeatMode: .none
    )
    return fromToBy
}
```

PS: I can confirm that I can set this bone to a specific position if I use:

```swift
guard let index = newPose.jointNames.firstIndex(of: boneName) ...
let baseTransform = basePose.jointTransforms[index]
newPose.jointTransforms[index] = Transform(
    scale: baseTransform.scale,
    rotation: baseTransform.rotation * extraRot,
    translation: baseTransform.translation
)
skeletalComponent.poses.default = newPose
creatureMeshEntity.components.set(skeletalComponent)
```

This works for manually setting the bone position, so jawBoneName and the joint transformation can't be that wrong.
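A minimal per-frame fallback, building on the pose override that the post confirms works (rather than the FromToByAnimation path): drive the jaw manually from an update loop. The sketch below reuses the post's own names (basePose, jawBoneName, skeletalComponent, creatureMeshEntity) and is untested against a real rig.

```swift
// Hedged sketch: write the jaw rotation through the pose override each frame.
// mouthOpen is expected in 0...1; step it over time from e.g. a
// SceneEvents.Update subscription to get motion without an AnimationResource.
func updateJaw(mouthOpen: Float) {
    guard var newPose = basePose,
          let index = newPose.jointNames.firstIndex(of: jawBoneName) else { return }

    let angle: Float = 40 * mouthOpen * (.pi / 180)          // degrees -> radians
    let extraRot = simd_quatf(angle: angle, axis: [0, 0, 1])
    let base = newPose.jointTransforms[index]

    newPose.jointTransforms[index] = Transform(
        scale: base.scale,
        rotation: base.rotation * extraRot,
        translation: base.translation
    )
    skeletalComponent.poses.default = newPose
    creatureMeshEntity.components.set(skeletalComponent)
}
```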
1 reply · 0 boosts · 298 views · Aug ’25
How to mix Animation and IKRig in RealityKit
I want an AR character to be able to look at a position while still playing the character's animation. So far, I've managed to manually adjust a single bone rotation using

```swift
skeletalComponent.poses.default = Transform(
    scale: baseTransform.scale,
    rotation: lookAtRotation,
    translation: baseTransform.translation
)
```

which I run at every rendering update while a full-body animation is playing. But of course, hardcoding a single joint (in my case the head) to point in a direction doesn't look as nice as running some inverse kinematics that involves the hips, neck, and head joints. I found some good IKRig code in "Composing interactive 3D content with RealityKit and Reality Composer Pro". But when I try to adjust the rig while animations are playing, the animations usually win over the IKRig's changes to the mesh.
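In the absence of a supported way to blend IK over a playing clip, one workaround in the same spirit as the post's per-update override is to re-apply a softened look-at across several joints after the animation system has written its pose each frame. A rough sketch; the joint names and weights are assumptions, and it presumes skeletalComponent.poses.default is readable as well as writable:

```swift
// Distribute the look-at correction across hips -> neck -> head with
// increasing weights so the turn reads naturally instead of hinging one bone.
let lookAtWeights: [(joint: String, weight: Float)] = [
    ("hips", 0.15), ("neck", 0.35), ("head", 0.5)   // hypothetical joint names
]

func applyLookAt(_ lookAtRotation: simd_quatf) {
    var pose = skeletalComponent.poses.default       // pose the animation just produced
    for (joint, weight) in lookAtWeights {
        guard let i = pose.jointNames.firstIndex(of: joint) else { continue }
        let t = pose.jointTransforms[i]
        pose.jointTransforms[i] = Transform(
            scale: t.scale,
            // slerp only part of the way toward the target at each joint
            rotation: simd_slerp(t.rotation, lookAtRotation, weight),
            translation: t.translation
        )
    }
    skeletalComponent.poses.default = pose
    // then re-set the component on your entity, as in the post's working snippet
}
```

Calling this from a SceneEvents.Update subscription keeps the correction applied after each animation tick, which is one way to stop the clip from "winning".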
1 reply · 0 boosts · 528 views · Aug ’25
SpatialEventGesture Not Working to Show Hidden Menu in Immersive Panorama View - visionOS
Problem Description

I'm developing a Vision Pro app that displays 360° panoramic photos in a full immersive space. I have a floating menu that auto-hides after 5 seconds, and I want users to be able to show the menu again using spatial gestures (particularly pinch gestures) when it's hidden. However, the SpatialEventGesture implementation is not working as expected: the menu doesn't appear when users perform pinch gestures or other spatial interactions in the immersive space.

Current Implementation

Here's the relevant gesture-detection code in my ImmersiveView:

```swift
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        RealityView { content in
            // RealityView content setup with panoramic sphere...
            let rootEntity = Entity()
            content.add(rootEntity)
            // Load panoramic content here...
        }
        // Using SpatialEventGesture to handle multiple spatial gestures
        .gesture(
            SpatialEventGesture()
                .onEnded { eventCollection in
                    // Check menu visibility state
                    if !appModel.isPanoramaMenuVisible {
                        // Iterate through the event collection to handle various gestures
                        for event in eventCollection {
                            switch event.kind {
                            case .touch:
                                print("Detected spatial touch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .indirectPinch:
                                print("Detected spatial pinch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .pointer:
                                print("Detected spatial pointer gesture, showing menu")
                                showMenuWithGesture()
                                return
                            @unknown default:
                                print("Detected unknown spatial gesture: \(event.kind)")
                                showMenuWithGesture()
                                return
                            }
                        }
                    }
                }
        )
        // Keep long-press gesture as a backup
        .simultaneousGesture(
            LongPressGesture(minimumDuration: 1.5)
                .onEnded { _ in
                    if !appModel.isPanoramaMenuVisible {
                        print("Detected long press gesture, showing menu")
                        showMenuWithGesture()
                    }
                }
        )
    }

    private func showMenuWithGesture() {
        if !appModel.isPanoramaMenuVisible {
            appModel.showPanoramaMenu()
            if !appModel.windowExists(id: "PanoramaMenu") {
                openWindow(id: "PanoramaMenu", value: "menu")
            }
        }
    }
}
```

What I've Tried

- Multiple SpatialTapGesture approaches: originally used multiple .gesture() modifiers with SpatialTapGesture(count: 1) and SpatialTapGesture(count: 2), but realized they override each other.
- SpatialEventGesture implementation: switched to SpatialEventGesture to handle multiple event types (.touch, .indirectPinch, .pointer), but pinch gestures still don't trigger the menu.
- Added debugging: console logs show that the gesture callbacks are never called when performing pinch gestures in the immersive space.
- Backup LongPressGesture: added a simultaneous long-press gesture as a backup, which also doesn't work consistently.

Expected Behavior

When the panorama menu is hidden (after the 5-second auto-hide), users should be able to:
- Perform a pinch gesture (indirect pinch) to show the menu
- Tap in space to show the menu
- Use other spatial gestures to show the menu

Questions

1. Is SpatialEventGesture the correct approach for detecting gestures in a full immersive RealityView?
2. Are there any special considerations for gesture detection when the RealityView contains a large panoramic sphere that might be intercepting gestures?
3. Should I be using a different gesture approach for visionOS immersive spaces?
4. Is there a way to ensure gestures work even when the RealityView content (panoramic sphere) might be blocking them?

Environment

- Xcode 16.1
- visionOS 2.5
- Testing on a Vision Pro device
- App uses SwiftUI + RealityKit

Any guidance on the proper way to implement spatial gesture detection in visionOS immersive spaces would be greatly appreciated!

Additional Context

The app manages multiple windows, and the gesture detection should work specifically in immersive panorama mode with the menu hidden. Thank you for any help or suggestions!
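A data point that may explain the silent callbacks: in a RealityView, spatial gestures are generally delivered only when the gaze/pinch targets an entity that has both a CollisionComponent and an InputTargetComponent, so a bare panoramic sphere gives the gesture nothing to hit. A minimal sketch of that setup (the radius and the use of SpatialTapGesture here are illustrative assumptions, and a sphere viewed from the inside may additionally need an inward-facing collision mesh):

```swift
import SwiftUI
import RealityKit

// Make the panorama sphere a valid gesture target.
func makeTappablePanorama() -> Entity {
    let sphere = Entity()
    // Collision shape roughly matching the panorama (assumed 10 m radius).
    // Note: hit-testing from inside a convex shape may not register; a custom
    // inward-facing mesh shape could be required.
    sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 10)]))
    sphere.components.set(InputTargetComponent())
    return sphere
}

// Then target the gesture at entities instead of attaching it bare:
// RealityView { content in content.add(makeTappablePanorama()) }
//     .gesture(
//         SpatialTapGesture()
//             .targetedToAnyEntity()
//             .onEnded { _ in showMenuWithGesture() }
//     )
```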
1 reply · 0 boosts · 191 views · Jun ’25
iOS needs to allow for background Bluetooth scanning. I can't fully build my app.
iOS currently restricts background Bluetooth advertising and scanning in order to preserve battery life and protect user privacy. While these restrictions serve important purposes, they also limit legitimate use cases where users have explicitly opted in to proximity-based experiences.

The core challenge is that modern social applications need a way to detect when users are physically present at the same location or event without requiring every participant to keep their app in the foreground. Under the current system, background BLE advertising is heavily throttled and can only transmit a limited payload, background scanning intervals are sparse and unpredictable, peer-to-peer proximity detection cannot be maintained reliably when apps are in the background, and Background App Refresh is non-deterministic, making any kind of time-based proximity validation impossible.

A proposed enhancement would be to introduce an “Enhanced Proximity Permission.” This would allow developers to enable reliable background BLE advertising and scanning for declared time windows, such as a maximum of eight hours. It would also allow devices running the same app to detect each other’s proximity using ephemeral, rotating identifiers that preserve privacy, with clear user consent and prominent indicators whenever the feature is active.

Unlocking this capability would open up new categories of applications. Live events could offer automatic attendance tracking at concerts, conferences, or sports venues. Retail environments could support opt-in foot-traffic analysis and dwell-time insights. Social apps could let users find friends at festivals, campuses, or other large venues. Safety applications could extend to crowd-density monitoring and contact tracing beyond COVID-era needs. Gaming could offer real-world multiplayer experiences based on physical proximity, and transportation providers could verify rideshare pickups or measure public transit flows automatically.

Privacy safeguards would remain central. Permissions would be time-boxed and expire after an event or session. A mandatory visual indicator would be displayed whenever proximity tracking is active. A user-facing dashboard would show all apps granted enhanced proximity access. Permissions would automatically be revoked after a period of non-use, and only ephemeral tokens, not permanent identifiers, would be broadcast.

The industry impact would be significant. With this enhancement, iOS could power the next generation of location-aware social platforms while maintaining Apple’s leadership in privacy through explicit user control and transparency. Current alternatives, such as requiring users to keep apps in the foreground or deploying dedicated hardware beacons, produce poor user experiences and constrain innovation in spatial computing and social applications.

Can anyone from Apple consider this change? Having to buy iBeacons is brutal and means slower adoption. Please reconsider this for users who opt in.
1 reply · 0 boosts · 1.1k views · Sep ’25
Do you retain a reference to your content events in RealityView?
Do you retain a reference to your content (RealityViewContent) event subscriptions? For example, the Manipulation Events docs from Apple use _ to discard the result; in theory the subscription should keep working as long as the content is alive.

```swift
_ = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] =
        SimpleMaterial(color: .blue, isMetallic: false)
}

_ = content.subscribe(to: ManipulationEvents.WillEnd.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] =
        SimpleMaterial(color: .red, isMetallic: false)
}
```

We could instead store these subscriptions in state; I've seen this in a few samples and apps:

```swift
@State var beginSubscription: EventSubscription?
...
beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] =
        SimpleMaterial(color: .blue, isMetallic: false)
}
```

The main advantage I see is that we can be more explicit about when we remove the subscription. Are there other reasons to keep a reference to these events?
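One concrete reason to keep the reference: EventSubscription has a cancel() method, so a stored subscription can be torn down at a moment you choose (leaving a mode, dismissing a view) rather than living as long as the content. A small sketch along those lines; the array-based bookkeeping is just one possible arrangement:

```swift
import SwiftUI
import RealityKit

struct ManipulationColorView: View {
    // Keep subscriptions so they can be cancelled deterministically.
    @State private var subscriptions: [EventSubscription] = []

    var body: some View {
        RealityView { content in
            subscriptions.append(
                content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
                    event.entity.components[ModelComponent.self]?.materials[0] =
                        SimpleMaterial(color: .blue, isMetallic: false)
                }
            )
        }
        .onDisappear {
            // Stop receiving events now, not whenever the content goes away.
            subscriptions.forEach { $0.cancel() }
            subscriptions.removeAll()
        }
    }
}
```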
1 reply · 0 boosts · 590 views · Sep ’25
How to transition from spatial3d to spatial3DImmersive?
Hi, when viewing a spatial photo scene in the Apple Vision Pro Photos app, you can tap the immersive icon in the top-right corner to transition from the window presenting the image as spatial3d to an immersive photo scene with spatial3DImmersive, where the window borders disappear. Could someone explain how to achieve that? I tried to do it, but once I transition from spatial3d to spatial3DImmersive I can still see a rectangle around the spatial image. Thanks.
1 reply · 0 boosts · 871 views · Nov ’25
Need to rotate child of a 3D mesh
I am creating a Vision Pro app with a 3D model that has a mesh hierarchy of head, hands, feet, etc. I want the character to look toward the camera, but I'm not able to access the character's head through SceneKit or RealityKit. When I try to print the names of the child meshes, it only prints down to the character; it doesn't iterate through all the body parts. Can anyone help?
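If the model is rigged, the body parts often exist as skeleton joints on a single ModelEntity rather than as child entities, which would explain why a hierarchy walk stops at the character. A quick diagnostic sketch covering both cases (the function name is illustrative):

```swift
import RealityKit

// Recursively print the entity tree, plus any skeleton joints found on
// ModelEntity nodes. Rigged heads/hands usually appear here as joints.
func dumpEntity(_ entity: Entity, indent: String = "") {
    print("\(indent)\(entity.name) (\(type(of: entity)))")
    if let model = entity as? ModelEntity, !model.jointNames.isEmpty {
        // Joint names are often full paths like "root/hips/spine/neck/head".
        print("\(indent)  joints: \(model.jointNames)")
    }
    for child in entity.children {
        dumpEntity(child, indent: indent + "  ")
    }
}
```

If the head shows up in jointNames, it can be rotated by editing the matching entry in jointTransforms rather than by looking for a child entity.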
1 reply · 0 boosts · 218 views · Sep ’25
openImmersiveSpace works as expected but never returns Result
As in the title: openImmersiveSpace works as expected. The ImmersiveSpace with that ID opens normally, but the function never resolves to a Result. Here is a workaround I used to make sure it was on the UI thread and that scenePhase was active:

```swift
@MainActor
func openSpaceWithStateCheck() async {
    if scenePhase == .active {
        Task {
            switch await openImmersiveSpace(id: "RoomCaptureInteraction") {
            case .opened:
                isCapturingImagery = true
            case .error:
                print("!! An error occurred when trying to open the immersive space captureRoomImagery")
            case .userCancelled:
                print("!! The user declined opening immersive space captureRoomImagery")
            @unknown default:
                print("!! unknown default result of opening space")
            }
        }
    } else {
        print("Scene not active, deferring immersive space opening")
    }
}
```

I'm on visionOS 2.4 and SDK 2.2. I have tried uninstalling the app and rebuilding, and tried simply opening an empty ImmersiveSpace. The consistency of the ImmersiveSpace opening at least means I can work around it; even dismissImmersiveSpace works normally and closes the immersive space. But a workaround seems ham-fisted.
1 reply · 0 boosts · 87 views · Mar ’25
window loses resize feature on reopening
I have this problem on visionOS: when I dismiss and reopen a window from an ImagePresentationComponent, the reopened window is missing the resize UI elements when I look at the window corners. The rest of the window UI elements (drag, close, ...) are there, and resizing was possible before the window was dismissed. The code is something like this:

```swift
WindowGroup(id: "image-display-window", .....
}
.windowResizability(.automatic)
.windowStyle(.plain)
```

I call dismissWindow() from the window view and it is dismissed correctly. Then I call openWindow(id: "image-display-window", value: data) from another view to reopen it. It reopens, but it's missing the ability to resize. Does anyone know how to fix this? Thanks.
1 reply · 0 boosts · 269 views · Dec ’25
How to handle tasks when the Vision Pro is taken off?
I have a gRPC server running inside a task. When the user takes the headset off, the gRPC server stops working, and it will no longer work when they put the headset back on. I would like to detect this so that I can cancel the task (which will effectively close the gRPC server). I am also using a visual indicator to let the user know whether the server is running, but it does not accurately reflect the state of the server across removing and putting back on the headset.
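One approach worth trying: on visionOS, taking the headset off deactivates the app's scenes, so scenePhase is a reasonable proxy for wear state. A sketch under that assumption; the server calls are placeholders for the post's gRPC code:

```swift
import SwiftUI

struct ServerHostView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var serverTask: Task<Void, Never>?
    @State private var isServerRunning = false   // drives the visual indicator

    var body: some View {
        Text(isServerRunning ? "Server running" : "Server stopped")
            .onChange(of: scenePhase) { _, newPhase in
                switch newPhase {
                case .active:
                    // Headset back on: start a fresh server task.
                    serverTask = Task {
                        isServerRunning = true
                        await runGRPCServer()        // placeholder
                        isServerRunning = false
                    }
                default:
                    // Headset off (inactive/background): cancel, closing the server.
                    serverTask?.cancel()
                    serverTask = nil
                    isServerRunning = false
                }
            }
    }

    func runGRPCServer() async { /* placeholder for the post's server */ }
}
```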
1 reply · 0 boosts · 304 views · Mar ’25
Can't find a DLL in a visionOS app with Unity
Dear all, I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework, written in Swift, to use the text-to-speech functionality of visionOS from Unity. I create an Objective-C wrapper with the following declarations:

```objc
...
void _initTTS(int);
...
```

I build the framework, import it into Unity, and call the functions from a C# wrapper class. The code is as follows:

```csharp
public static class TTSPluginManager
{
    [DllImport("TTS_Vision")]
    private static extern void _initTTS(int val);
    ...

    public static void Initialize()
    {
#if UNITY_VISIONOS
        _initTTS(0);
#else
        Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring.");
#endif
    }
}
```

I have managed to compile and run the program on the Apple Vision Pro, but I keep getting the following error:

DllNotFoundException: TTS_Vision assembly: type: member:(null)
TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33)
LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17)
InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24)
InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18)

If I run the generated code from Xcode, I can see the app on the AVP, but I keep getting a loading error:

DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried to load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file)
  at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0
  at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0

I can see in the generated Xcode project that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the search paths, with no success. Any hints or suggestions are much appreciated.
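A likely culprit, hedged as the long-standing Unity convention rather than something verified on Unity 6.2: on Apple device builds, native plugins are linked statically into the app binary, so a DllImport with a library name makes the runtime try to dlopen a standalone dylib that doesn't exist, which is exactly this DllNotFoundException. Using "__Internal" on iOS/visionOS targets usually resolves it:

```csharp
#if UNITY_VISIONOS || UNITY_IOS
    private const string NativeLib = "__Internal";   // statically linked on device
#else
    private const string NativeLib = "TTS_Vision";
#endif

    [DllImport(NativeLib)]
    private static extern void _initTTS(int val);
```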
1 reply · 0 boosts · 298 views · Sep ’25