With the new ImagePresentationComponent in visionOS 26, how can text/overlays be shown on top of the image as seen in the Spatial Gallery app?
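I haven't found documentation for a built-in overlay on ImagePresentationComponent, so below is a minimal sketch of one possible approach (not a confirmed recreation of the Spatial Gallery behavior): place a SwiftUI attachment slightly in front of the entity that carries the component. The entity, attachment id, caption text, and offset are all placeholder assumptions.

import SwiftUI
import RealityKit

// A minimal sketch, assuming `imageEntity` already has an
// ImagePresentationComponent configured (as in the "Presenting images in
// RealityKit" sample). The attachment id, caption, and offset are placeholders.
struct ImageWithCaptionView: View {
    let imageEntity: Entity

    var body: some View {
        RealityView { content, attachments in
            content.add(imageEntity)
            if let caption = attachments.entity(for: "caption") {
                caption.position = [0, -0.25, 0.05] // hypothetical offset, in meters
                content.add(caption)
            }
        } attachments: {
            Attachment(id: "caption") {
                Text("Caption text")
                    .font(.title3)
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}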
Hello, since updating to beta 3 the sculpting sample app doesn't work; it crashes on launch.
It seems to be something in AnchorEntity or AccessoryAnchoringSource:
Referenced from: <00B81486-1A74-30A0-B75B-4B39E3AF57DF> /private/var/containers/Bundle/Application/3D2EBF59-19F0-4BF4-8567-6962AA36A2C6/delete.app/delete.debug.dylib
Expected in: <BAA9B221-78A1-3B99-AA2F-B8DFCD179FC7> /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
Hi All,
We're a studio building an app, and one of our scenes contains a 3D asset with a smoke particle emitter and a curved mesh that plays video. I've noticed that the scene works fine when either the video or the particle effect is active on its own, but the frame rate drops drastically when both are turned on at the same time.
How do I solve this? It's an important storytelling feature.
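One mitigation worth trying, sketched below under the assumption that the smoke is driven by a RealityKit ParticleEmitterComponent (the entity name and reduction factor are placeholders): lower the emitter's birth rate while the video is playing, then restore it afterwards.

import RealityKit

// Hedged sketch: reduce the particle load while the video plays.
// `smokeEntity` and the 0.5 factor are placeholders, not values from the scene.
func setSmokeLoad(on smokeEntity: Entity, factor: Float) {
    guard var emitter = smokeEntity.components[ParticleEmitterComponent.self] else { return }
    emitter.mainEmitter.birthRate *= factor
    smokeEntity.components.set(emitter)
}

// Example: halve the birth rate when video playback starts.
// setSmokeLoad(on: smokeEntity, factor: 0.5)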
Hi community,
I have a pair of stereo images, one for each eye. How should I render them on visionOS?
I know that for 3D videos, the AVPlayerViewController could display them in fullscreen mode. But I couldn't find any docs relating to 3D stereo images.
I guess my question can be put more generally: is there any method to render different content for each eye? This could also be helpful for someone who has sight in only one eye.
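One approach I'm aware of for per-eye content in RealityKit is a ShaderGraph material authored in Reality Composer Pro that routes two textures through a Camera Index Switch node. A rough sketch, where the material path, USD file name, parameter names, texture names, and the RealityKitContent bundle are all assumptions:

import RealityKit
import RealityKitContent

// Hedged sketch: bind a left and a right texture to a Reality Composer Pro
// ShaderGraph material that routes them through a Camera Index Switch node.
// The material path, parameter names, and texture names are placeholders.
func makeStereoPlane() async throws -> ModelEntity {
    var material = try await ShaderGraphMaterial(named: "/Root/StereoImageMaterial",
                                                 from: "Immersive.usda",
                                                 in: realityKitContentBundle)
    let left = try await TextureResource(named: "left_eye")
    let right = try await TextureResource(named: "right_eye")
    try material.setParameter(name: "LeftImage", value: .textureResource(left))
    try material.setParameter(name: "RightImage", value: .textureResource(right))

    // A simple plane that shows the left image to the left eye and the
    // right image to the right eye via the material above.
    return ModelEntity(mesh: .generatePlane(width: 0.6, height: 0.4),
                       materials: [material])
}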
When I've made an animated USDZ, at what frame rate will the animation be rendered in Quick Look? Is it the same across all devices (iPhone, Apple Vision Pro, etc.) and viewing environments (Quick Look, inside an ARView, etc.)?
Suppose I export my file at 30 fps and the device draws at 60 fps: does the device interpolate between frames automatically, animate at a lower frame rate, or play it at twice the speed? What if it were 24 fps?
My primary concern with understanding frame rates is a bit of trouble I've had making perfectly looping animations. There always seems to be the slightest stutter between iterations.
Thanks in advance for any insights you're able to provide!
What is the effect called where the edges around the central image gradually become transparent? It is also seen when viewing spatial photos in immersive mode on Vision Pro. How can I achieve this effect using SwiftUI or ShaderGraph? I want to use it when displaying images in my app.
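The effect is commonly described as an edge fade or vignette. In plain SwiftUI, one rough approximation (not necessarily how visionOS implements it for spatial photos) is to mask the image with a radial gradient so the edges fall off to transparent; the image name, frame, and radii below are placeholders.

import SwiftUI

// Hedged SwiftUI sketch: fade the edges of an image to transparent with a
// radial-gradient mask. "photo" and the sizes are placeholder values.
struct FadedEdgeImage: View {
    var body: some View {
        Image("photo")
            .resizable()
            .scaledToFill()
            .frame(width: 400, height: 300)
            .mask {
                RadialGradient(colors: [.white, .white, .clear],
                               center: .center,
                               startRadius: 80,
                               endRadius: 220)
            }
    }
}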
I have discovered that RemoteImmersiveSpace is limited to content conforming to the CompositorContent protocol, which precludes using RealityView directly. Consequently, I am interested in understanding the appropriate way to integrate CompositorContent within RemoteImmersiveSpace. Thanks.
My visionOS app (Travel Immersive) has two interface windows: a main 2D interface window and a 3D Earth window. If the user closes the main interface window first and then the Earth window, tapping the app icon again launches only the Earth window and fails to display the main interface window. However, if the user closes the Earth window first and then the main interface window, the app relaunches normally.
Below is the code:
import SwiftUI

@main
struct Travel_ImmersiveApp: App {
    @StateObject private var appModel = AppModel()

    var body: some Scene {
        WindowGroup(id: "MainWindow") {
            ContentView()
                .environmentObject(appModel)
                .onDisappear {
                    appModel.closeEarthWindow = true
                }
        }
        .windowStyle(.automatic)
        .defaultSize(width: 1280, height: 825)

        WindowGroup(id: "Earth") {
            if !appModel.closeEarthWindow {
                Globe3DView()
                    .environmentObject(appModel)
                    .onDisappear {
                        appModel.isGlobeWindowOpen = false
                    }
            } else {
                EmptyView() // Render an empty view when closed
            }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)

        ImmersiveSpace(id: "ImmersiveView") {
            ImmersiveView()
                .environmentObject(appModel)
        }
    }
}
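One direction that might be worth trying, sketched below using the scene restoration and launch modifiers introduced alongside visionOS 2 (whether they fully resolve this relaunch-order behavior is an assumption): keep the Earth window out of restoration so a fresh launch always presents the main window.

// Hedged sketch: exclude the Earth window from scene restoration so that
// relaunching from the app icon always presents the main window first.
// Whether this fixes the exact behavior described above is an assumption.
WindowGroup(id: "Earth") {
    Globe3DView()
        .environmentObject(appModel)
}
.windowStyle(.volumetric)
.defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
.restorationBehavior(.disabled)     // do not restore this window on relaunch
.defaultLaunchBehavior(.suppressed) // never use it as the window shown at launch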
Is there any size guidance for the new WidgetKit integration on visionOS? The Widget HIG provides dimensions for all the widget size classes on iOS, iPadOS and watchOS, but has not been updated for visionOS.
https://developer.apple.com/design/human-interface-guidelines/widgets
My potential widget use case is image-based, so I'm looking to better understand the optimal size, resolution, etc. that I would need, particularly for the new visionOS-specific extra-large widget size.
I am experiencing an issue where USDZ files exported from Blender do not display textures when opened in Apple Vision Pro Quick Look. Instead of the expected materials, the model appears completely white, as if the textures are missing or not being recognized by the rendering engine.
I saw this magical sand table demo; its unfolding and folding effects are similar to spreading out cards, which is very interesting, but I don't know how to achieve it. I'd like to know whether there is any way to achieve this effect and to get some ideas. Can this effect be achieved with the existing APIs?
I’m working on a Vision Pro app using Metal and need to implement multi-pass rendering. Specifically, I want to render intermediate results to a texture, then use that texture in a second pass for post-processing before presenting the final output.
What’s the best approach in visionOS? Should I use multiple render passes in a single command buffer or separate command buffers? Any insights on efficiently handling this in RealityKit or Metal?
Thanks!
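For what it's worth, here is a generic Metal sketch of the two-pass pattern (render to an offscreen texture, then sample it in a post-processing pass), recorded in a single command buffer. The pipeline states, the final pass descriptor (e.g. from the drawable being presented), and the sizes are assumed to exist elsewhere; nothing here is specific to CompositorServices.

import Metal

// Hedged sketch of a two-pass setup in one command buffer.
// `scenePipeline`, `postProcessPipeline`, and `finalPassDescriptor` are
// assumed to be created elsewhere.
func encodeTwoPasses(device: MTLDevice,
                     queue: MTLCommandQueue,
                     scenePipeline: MTLRenderPipelineState,
                     postProcessPipeline: MTLRenderPipelineState,
                     finalPassDescriptor: MTLRenderPassDescriptor,
                     width: Int, height: Int) {
    // Offscreen color target for the first pass.
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: width,
                                                        height: height,
                                                        mipmapped: false)
    desc.usage = [.renderTarget, .shaderRead]
    desc.storageMode = .private
    guard let offscreen = device.makeTexture(descriptor: desc),
          let commandBuffer = queue.makeCommandBuffer() else { return }

    // Pass 1: render the scene into the offscreen texture.
    let firstPass = MTLRenderPassDescriptor()
    firstPass.colorAttachments[0].texture = offscreen
    firstPass.colorAttachments[0].loadAction = .clear
    firstPass.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: firstPass) {
        encoder.setRenderPipelineState(scenePipeline)
        // ... draw scene geometry here ...
        encoder.endEncoding()
    }

    // Pass 2: sample the offscreen texture in the post-processing pass.
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: finalPassDescriptor) {
        encoder.setRenderPipelineState(postProcessPipeline)
        encoder.setFragmentTexture(offscreen, index: 0)
        // ... draw a full-screen quad here ...
        encoder.endEncoding()
    }

    commandBuffer.commit()
}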
I noticed that when I drag the menu window in an immersive view, the entities behind it become semi-transparent, and the boundary between virtual and real-world objects is very pronounced.
How does visionOS implement this effect? Is there any API or technique I can use in my own code to enable the same semi-transparent overlay, even when I am not dragging the menu window?
I saw mentions at WWDC25 of visionOS 26 now providing hand-tracking poses at 90 Hz, but I also recall that being a feature in visionOS 2.
Is there something new happening in visionOS 26 that makes its implementation of hand tracking "better"?
The prefetching logic for UICollectionView does not work on visionOS.
I have set up a standalone test repo to demonstrate this issue. The repo is basically a visionOS version of Apple's guide project on implementing prefetching logic.
In the repo you will see a simple view controller with a UICollectionView, wrapped inside a UIViewControllerRepresentable.
On scroll, it should print 🕊️ prefetch start to the console to demonstrate that func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) is called. However, this never happens on visionOS devices.
With the same code, it behaves correctly on iOS devices.
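For reference, here is a minimal sketch of the hookup being tested; the identifiers and cell configuration are placeholders rather than the exact code from the repo.

import UIKit

// Hedged sketch of the prefetch hookup under test; identifiers and the cell
// setup are placeholders, not the exact code from the linked repo.
final class GridViewController: UIViewController,
                                UICollectionViewDataSource,
                                UICollectionViewDataSourcePrefetching {
    private var collectionView: UICollectionView!
    private let itemCount = 1_000

    override func viewDidLoad() {
        super.viewDidLoad()
        let layout = UICollectionViewFlowLayout()
        layout.itemSize = CGSize(width: 120, height: 120)
        collectionView = UICollectionView(frame: view.bounds, collectionViewLayout: layout)
        collectionView.register(UICollectionViewCell.self, forCellWithReuseIdentifier: "cell")
        collectionView.dataSource = self
        collectionView.prefetchDataSource = self // expected to trigger prefetch callbacks
        view.addSubview(collectionView)
    }

    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
        itemCount
    }

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        collectionView.dequeueReusableCell(withReuseIdentifier: "cell", for: indexPath)
    }

    func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) {
        print("🕊️ prefetch start", indexPaths) // never printed on visionOS, per the report
    }
}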
Tags: SwiftUI, UIKit, visionOS, iPad and iOS apps on visionOS
In my visionOS app, I have two types of windows:
Main App Window – This is the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space Window – This opens only when a user starts streaming or playing a video.
Issue:
When entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to immersive space and reopen it when exiting from immersive space. However, this causes the app to restart instead of resuming from its previous state.
Desired Behavior:
I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.
Attempts & Challenges:
Tried managing opacity and visibility, but nothing worked as expected.
Couldn’t find a way to push the main window to the background while bringing the immersive space to the foreground.
Looking for a solution to keep the main window’s state intact while transitioning between immersive and normal modes.
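A sketch of one direction (an assumption, not a confirmed fix): hold the window's state in an app-level model owned by the App struct, so that even if the main window is dismissed and reopened around the immersive session, its views are rebuilt from the same data instead of starting over. All names below are placeholders.

import SwiftUI

// Hedged sketch: state lives in an app-level model rather than in the view,
// so closing and reopening the main window does not reset it.
// `VideoListState`, the views, and the ids are placeholders.
@MainActor
final class VideoListState: ObservableObject {
    @Published var selectedVideoID: String?
    @Published var scrollOffset: CGFloat = 0
}

struct VideoListView: View {
    @EnvironmentObject var listState: VideoListState
    var body: some View {
        Text("Selected: \(listState.selectedVideoID ?? "none")")
    }
}

struct PlayerImmersiveView: View {
    var body: some View {
        Text("Immersive playback content goes here")
    }
}

@main
struct PlayerApp: App {
    @StateObject private var listState = VideoListState() // survives window close/reopen

    var body: some Scene {
        WindowGroup(id: "MainWindow") {
            VideoListView()
                .environmentObject(listState)
        }

        ImmersiveSpace(id: "Player") {
            PlayerImmersiveView()
                .environmentObject(listState)
        }
    }
}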
Hi guys,
In visionOS, when using a ZStack decorated with .glassBackgroundEffect(), you can see the 3D glass background from the front, but when viewed from the side, the view appears to have no thickness.
However, I noticed that in an app built by Apple, when viewing a glass background view from the side, it appears to have thickness.
I tried adding .frame(depth:) to a glass background view, but it appears as two separate layers spaced by the depth value.
My question is:
Is there a view modifier that adds visual thickness to a glass background view, as shown in the picture?
Or, if not, how should I write a custom view modifier to achieve this effect? Thanks!
After updating to the latest visionOS beta, visionOS 26 Beta 4 (23M5300g), the 'Presenting images in RealityKit' sample from the following link no longer runs due to an error. https://developer.apple.com/documentation/RealityKit/presenting-images-in-realitykit
Expected / Previous:
Application builds and runs on device, working as described in the documentation.
Reality:
The application builds, but does not run on device due to an error (shown in screenshot): "Thread 1: EXC_BAD_ACCESS (code=1, address=0xb)". It still runs in the simulator, but not on device. When launching the app from Xcode, it builds and installs correctly but hangs due to this error. When launching the app from the Home Screen, it does not load and immediately returns to the Home Screen.
This Xcode project previously ran with no changes to code - the only change was updating the visionOS system software to the latest version. visionOS 26 Beta 4 (23M5300g)
Is anyone else experiencing this issue?
According to the official documentation, the .blur(radius:) modifier applies a Gaussian blur to the view it modifies. However, when applied directly to a RealityView, nothing inside it (neither 2D attachments nor 3D entities) appears to be blurred.
Here’s the test code:
struct ContentView: View {
    var body: some View {
        VStack(spacing: 20) {
            Text("Above the RealityView")
                .font(.title)

            RealityView { content, attachments in
                if let text = attachments.entity(for: "2dView") {
                    text.position.y = 0.1
                    content.add(text)
                }
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: true)]
                )
                content.add(box)
            } attachments: {
                Attachment(id: "2dView") {
                    Text("Above the Box")
                        .font(.title)
                }
            }
            .frame(width: 300, height: 300)
            .border(.blue)
            .blur(radius: 99) // Has no visual effect

            Text("Below the RealityView")
                .font(.subheadline)
        }
        .padding()
    }
}
My question:
How can I make .blur(radius:) visually affect the content rendered in a RealityView?
Can you provide a working example where .blur() visually affects any part of a RealityView?
Thanks!
I've tried following Apple's documentation to apply a video material to a ModelEntity, but I encountered a compile error while attempting to specify the spatial audio type.
It is a 360° video on a sphere, which plays just fine, but the audio is too quiet compared to the volume I get when I preview the video in Xcode. So I tried to configure the audio playback mode on the material, but it gives me a compile error:
'audioInputMode' is unavailable in visionOS
'audioInputMode' has been explicitly marked unavailable here (RealityFoundation.VideoPlaybackController.audioInputMode)
https://developer.apple.com/documentation/realitykit/videomaterial/
Code:
let player = AVPlayer(url: url)
// Instantiate and configure the video material.
let material = VideoMaterial(avPlayer: player)
// Configure audio playback mode.
material.controller.audioInputMode = .spatial // this line won’t compile.
visionOS 2.4, Xcode 16.4; also tried Xcode 26 beta 2.
The videos are HEVC in MPEG-4 containers.
Is there any other way to do this, or is there a workaround available?
Thank you.
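For what it's worth, a sketch of a possible workaround (an assumption, not a confirmed replacement for audioInputMode): on visionOS the audio of a VideoMaterial is played through the entity, so attaching a SpatialAudioComponent to the sphere entity and checking the AVPlayer's own volume are the knobs I would try first. The entity name and values below are placeholders.

import AVFoundation
import RealityKit

// Hedged sketch: adjust the entity-level spatial audio and the player volume.
// Whether this compensates for the unavailable audioInputMode is an assumption.
func configureAudio(for sphereEntity: Entity, player: AVPlayer) {
    sphereEntity.spatialAudio = SpatialAudioComponent(gain: 0) // 0 dB = no attenuation
    player.volume = 1.0                                        // full player volume
}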