hi
When analyzing our game in Instruments, I've always been confused by the two items "Drawable Present" and "Drawable Presented" in the GPU track. The timestamp of "Drawable Present" seems to be when the CPU calls present(_:) on the command buffer, rather than when the GPU actually finishes executing the encoded work. Also, what does "Drawable Presented" specifically mean? In our case, when a CPU stall occurs, the vsync interval appears to change in the next frame, and a surface that has already been rendered is not displayed. Why is this happening?
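For reference, here is roughly how we schedule presentation and observe the actual on-screen time (a sketch; the mapping to the Instruments labels is my assumption, which is exactly what I'd like confirmed):

// Registered before present(_:) so the handler is guaranteed to fire.
drawable.addPresentedHandler { presented in
    // presentedTime is the host time at which the frame actually reached the screen;
    // I assume this is what "Drawable Presented" marks in the GPU track.
    print("presented at \(presented.presentedTime)")
}
commandBuffer.present(drawable)   // I assume this CPU-side call is what "Drawable Present" marks
commandBuffer.commit()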
We have a macOS app (not yet released, but already in use by ourselves) that provides scoreboards for streaming sport events.
Nowadays nice animations for goals etc. are expected. We are streaming via NDI, which requires a CVPixelBuffer for each frame.
We currently create these animations using CABasicAnimation, CAKeyframeAnimation and other CAAnimation subclasses, and we use ScreenCaptureKit to generate the frames.
This works fine at 25/30 fps as long as the window our animations run in is visible, but that is not how it should be. Our main app window is a smaller control display that shows the animations at reduced size, while the streamed animations need to be in HD format and later maybe in 4K.
When the large window is offscreen, the animations are not calculated; we get about 1 frame per second. So we currently have to connect an external display to the MacBook and open the large window there. Ugly solution.
Are we using a completely wrong approach? Or is there a way to tell macOS to perform the animations even though the window is offscreen?
If it cannot work that way, what is an alternative?
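One alternative we are considering is rendering the animated layer tree into a Metal texture with CARenderer instead of capturing a window, so no visible window would be needed. A rough, untested sketch (the texture size, the scoreboardLayer variable and the per-frame driving code are placeholders):

import QuartzCore
import Metal

let device = MTLCreateSystemDefaultDevice()!
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                                 width: 1920, height: 1080,
                                                                 mipmapped: false)
textureDescriptor.usage = [.renderTarget, .shaderRead]
let texture = device.makeTexture(descriptor: textureDescriptor)!

// scoreboardLayer is a placeholder for the root CALayer hosting our CAAnimations
let renderer = CARenderer(mtlTexture: texture, options: nil)
renderer.layer = scoreboardLayer
renderer.bounds = CGRect(x: 0, y: 0, width: 1920, height: 1080)

// Called once per frame, e.g. from a CVDisplayLink at 25/30 fps
func renderFrame() {
    renderer.beginFrame(atTime: CACurrentMediaTime(), timeStamp: nil)
    renderer.addUpdate(renderer.bounds)
    renderer.render()
    renderer.endFrame()
    // texture now contains the frame; copy/convert it into the CVPixelBuffer for NDI.
}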
Problem Summary
After upgrading to iOS 26.1 and 26.2, I'm experiencing a particle positioning bug in RealityKit where ParticleEmitterComponent particles render at an incorrect offset relative to their parent entity. This behavior does not occur on iOS 18.6.2 or earlier versions, suggesting a regression introduced in the newer OS builds.
Environment Details
Operating System: iOS 26.1 & iOS 26.2
Framework: RealityKit
Xcode Version: 16.2 (16C5032a)
Expected vs. Actual Behavior
Expected: Particles should render at the position of the entity to which the ParticleEmitterComponent is attached, matching the behavior on iOS 18.6.2 and earlier.
Actual: Particles appear away from their parent entity, creating a visual misalignment that breaks the intended AR experience.
Steps to Reproduce
Create or open an AR application with RealityKit that uses particle components
Attach a ParticleEmitterComponent to an entity via a custom system
Run the application on iOS 26.1 or iOS 26.2
Observe that particles render at an offset position away from the entity
Minimal Code Example
Here's the setup from my test case:
Custom Component & System:
struct SparkleComponent4: Component {}

class SparkleSystem4: System {
    static let query = EntityQuery(where: .has(SparkleComponent4.self))

    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            // Only add once
            if entity.components.has(ParticleEmitterComponent.self) { continue }

            var newEmitter = ParticleEmitterComponent()
            newEmitter.mainEmitter.color = .constant(.single(.red))
            entity.components.set(newEmitter)
        }
    }
}
AR Setup:
// boxMesh is assumed here; the size is illustrative
let boxMesh = MeshResource.generateBox(size: 0.1)
let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
let model = Entity()
model.components.set(ModelComponent(mesh: boxMesh, materials: [material]))
model.components.set(SparkleComponent4())
model.position = [0, 0.05, 0]
model.name = "MyCube"
let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
anchor.addChild(model)
arView.scene.addAnchor(anchor)
Questions for the Community
Has anyone else encountered this particle positioning issue after updating to iOS 26.1/26.2?
Are there known workarounds or configuration changes to ParticleEmitterComponent that restore correct positioning?
Is this a confirmed bug, or could there be a change in coordinate system handling or transform inheritance that I'm missing?
Additional Information
I've already submitted this issue via Feedback Assistant (FB21346746).
I'm trying to convert my game from SceneKit to RealityKit. I noticed that even when the scene is static (nothing moves), RealityKit keeps using CPU. In SceneKit, CPU goes down to 0% with a static scene.
With this simplest of games, RealityKit keeps using about 65% CPU:
class ViewController: NSViewController {
    override func loadView() {
        view = ARView(frame: NSScreen.main!.frame)
    }
}
Is this expected or a bug?
I created FB22125047.
I have two devices (an iPod and an iPhone), each using a different Apple ID. I have an existing game to which I'm adding turn-based matches (TBM). When the iPod invites the iPhone, it sends an iMessage invite to the iPhone; when I tap on that message, I get "Retrieving", and then Game Center in Settings is opened, not my app (the same version is installed on both devices). When I start my app on the iPhone, that match is not shown in the matchmaker view controller.
When I send an invite from the iPhone to the iPod and tap the iMessage invite, the app starts, but the match isn't listed in the matchmaker view controller on the iPod (it is on the iPhone).
In addition, when I tap the info circle on the iPhone, it shows the two players and "App Store" under the Game Center name. However, when I do the same on the iPod, it shows "Play your turn" there.
Any ideas?
Hey, I'm using the CIDepthBlurEffect Core Image filter in my app. It seems to work OK, but I get these errors in the console when using the filter:
CoreImage Metal library does not contain function for name: sparserendering_xhlrb_scan
CoreImage Metal library does not contain function for name: sparserendering_xhlrb_diffuse
CoreImage Metal library does not contain function for name: sparserendering_xhlrb_copy_back
CoreImage Metal library does not contain function for name: plain_or_sRGB_copy
Am I missing some sort of import to gain access to these Metal functions? I am using my own custom shaders, but I assume you'd be able to use them alongside the built-in ones.
I recently needed to develop an application to obtain the window list, which requires Screen Recording permissions. Apple's official documentation mentions using the two functions CGPreflightScreenCaptureAccess and CGRequestScreenCaptureAccess to request permissions. These functions are stated to be available since version 10.15. However, when I used these two functions on a device running macOS 10.15.7, I encountered the errors shown in the attached screenshot. I used the nm tool to inspect the symbols in the CoreGraphics.framework and found that these two functions were not present. Could you help me understand why this is happening?
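For reference, the workaround I'm experimenting with is resolving the symbols at runtime, so the call can be skipped when this CoreGraphics build doesn't actually export them (a sketch based on my assumption about the cause; the fallback path is up to the app):

import Darwin
import Foundation

typealias PreflightFn = @convention(c) () -> Bool

let cgPath = "/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics"
if let handle = dlopen(cgPath, RTLD_NOW),
   let symbol = dlsym(handle, "CGPreflightScreenCaptureAccess") {
    let preflight = unsafeBitCast(symbol, to: PreflightFn.self)
    print("Screen Recording permission already granted: \(preflight())")
} else {
    // Symbol not exported on this OS build; fall back to another check or just attempt the capture.
    print("CGPreflightScreenCaptureAccess is not available here")
}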
I have been tasked with creating content for the Apple Vision Pro. Just the 3D content and animation, not the programming end of things.
I can't seem to get any kind of mesh deformation animation (bones/skinning, or even a point cache) to import into Reality Composer Pro.
I work on PC, and my main software is 3DS Max, but I'm borrowing an iMac for this job, and was instructed to use RCP on it for testing before handing things off to the programmer. My files open and play fine in other USD programs, like Omniverse, or USD View, just not Reality Composer Pro.
I've seen the dinosaur demo in AVP, so I know mesh deformation is possible. If there are other essential tools that might make this possible, I have not been made aware of them.
I am experimenting with round-tripping things through Blender, in case that exports better, but I'm not really having luck there either, though my results are different.
Thanks.
Hi everyone,
I'm using the Vision framework’s ImageAestheticsScoresObservation class (https://developer.apple.com/documentation/vision/imageaestheticsscoresobservation).
I noticed that the overallScore returned sometimes gives negative values. Could someone confirm whether the expected range of the score is from -1.0 to 1.0?
The documentation doesn’t explicitly state the possible score range, so I’d appreciate any clarification or insights.
Thanks in advance!
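For context, this is roughly how I obtain the score (a sketch assuming the Swift-only Vision API with CalculateImageAestheticsScoresRequest; the negative values and my guess at a -1.0 to 1.0 range are exactly what I'm asking about):

import Vision

func aestheticsScore(for url: URL) async throws -> Float {
    let request = CalculateImageAestheticsScoresRequest()
    let observation = try await request.perform(on: url)
    // isUtility flags document/receipt-style images, which appear to score low or negative
    print("overall: \(observation.overallScore), utility: \(observation.isUtility)")
    return observation.overallScore
}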
I'm converting my game from SceneKit to RealityKit. It has a SpriteKit overlay that, according to the "Explore advanced rendering with RealityKit 2" session, I can add with the code below.
The code runs fine if the SKScene only contains a SKSpriteNode (see the commented out line), but when I add a SKShapeNode with a fillColor instead, the app crashes with this error:
-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5970: failed assertion `Draw Errors Validation
MTLDepthStencilDescriptor uses frontFaceStencil but MTLRenderPassDescriptor has a nil stencilAttachment texture
MTLDepthStencilDescriptor uses backFaceStencil but MTLRenderPassDescriptor has a nil stencilAttachment texture
'
I don't know enough about low-level graphics and stencils yet to figure out a quick solution, so I would appreciate if anyone could share an easy fix or explanation of what's wrong. Thanks!
class ViewController: NSViewController {
    var device: MTLDevice!
    var renderer: SKRenderer!

    override func loadView() {
        let arView = ARView(frame: NSScreen.main!.frame)
        view = arView

        arView.renderCallbacks.prepareWithDevice = { [weak self] device in
            guard let self = self else { return }
            self.device = device

            // Create the SpriteKit renderer on the same device RealityKit hands us
            renderer = SKRenderer(device: device)
            let scene = SKScene()
            let shape = SKShapeNode(rectOf: CGSize(width: 10, height: 10))
            shape.fillColor = .red
            scene.addChild(shape)
            // scene.addChild(SKSpriteNode(color: .red, size: CGSize(width: 10, height: 10)))
            renderer.scene = scene
        }

        arView.renderCallbacks.postProcess = { [weak self] context in
            guard let self = self else { return }

            let encoder = context.commandBuffer.makeBlitCommandEncoder()
            encoder?.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
            encoder?.endEncoding()

            renderer.update(atTime: context.time)

            let descriptor = MTLRenderPassDescriptor()
            descriptor.colorAttachments[0].loadAction = .load
            descriptor.colorAttachments[0].storeAction = .store
            descriptor.colorAttachments[0].texture = context.targetColorTexture

            renderer.render(withViewport: CGRect(x: 0, y: 0,
                                                 width: context.targetColorTexture.width,
                                                 height: context.targetColorTexture.height),
                            commandBuffer: context.commandBuffer,
                            renderPassDescriptor: descriptor)
        }
    }
}
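Based on the error message, one thing I've started trying (an assumption on my part, not a confirmed fix) is giving the overlay pass its own depth/stencil texture before calling renderer.render, since SKShapeNode's fill path seems to need a stencil buffer; caching and resizing of the texture are omitted here:

// Inside the postProcess callback, before renderer.render(...)
let depthStencilDescriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .depth32Float_stencil8,
    width: context.targetColorTexture.width,
    height: context.targetColorTexture.height,
    mipmapped: false)
depthStencilDescriptor.usage = .renderTarget
depthStencilDescriptor.storageMode = .private
let depthStencilTexture = self.device.makeTexture(descriptor: depthStencilDescriptor)

descriptor.depthAttachment.texture = depthStencilTexture
descriptor.depthAttachment.loadAction = .clear
descriptor.depthAttachment.storeAction = .dontCare
descriptor.stencilAttachment.texture = depthStencilTexture
descriptor.stencilAttachment.loadAction = .clear
descriptor.stencilAttachment.clearStencil = 0
descriptor.stencilAttachment.storeAction = .dontCare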
I'm looking to create an effect on iOS that tracks the user's face position with ARKit and shifts nearer/more prominent geometry in the scene around, while more "distant" geometry stays fixed to the XY plane, making it look like the geometry on screen "sticks out".
I've managed to implement most of this successfully, but it's not perfect when using PerspectiveCameraComponent in RealityKit because as I shift the camera (and change its field of view based on the user's distance) the backplane changes its orientation (it's always orthogonal to camera's direction).
I've tried adopting ProjectiveTransformCameraComponent instead. The idea is that the camera shifts around the scene, mirroring the user's head's position, looking at (0,0,0) and the back plane is adjusted to be parallel with the X,Y plane (animation replicated in Blender below).
However, I can't manage to set up ProjectiveTransformCameraComponent with an appropriate matrix or update its transform property in a RealityKit System correctly.
I also tried setting many simpler projection matrices as described in a number of guides on camera projection matrices on the internet and all I get is a blank view.
Does anyone have some guidance on what the projection matrix that ProjectiveTransformCameraComponent expects is meant to look like or how I would go about accomplishing my goal?
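For reference, this is the kind of matrix I've been trying to feed it: a standard off-axis (asymmetric frustum) perspective matrix, with the frustum bounds computed from the tracked head position relative to the screen rectangle. This is a sketch of my current attempt, not a known-correct answer; cameraEntity and the numeric bounds are placeholders, and I'm not sure whether RealityKit wants this depth convention or a reverse-Z variant:

import simd
import RealityKit

// Right-handed, camera looking down -Z, depth mapped to [0, 1].
func offAxisProjection(left l: Float, right r: Float, bottom b: Float, top t: Float,
                       near n: Float, far f: Float) -> float4x4 {
    float4x4(columns: (
        SIMD4<Float>(2 * n / (r - l), 0, 0, 0),
        SIMD4<Float>(0, 2 * n / (t - b), 0, 0),
        SIMD4<Float>((r + l) / (r - l), (t + b) / (t - b), f / (n - f), -1),
        SIMD4<Float>(0, 0, n * f / (n - f), 0)
    ))
}

// left/right/bottom/top are the screen rectangle's edges relative to the head position,
// scaled by near / headDistance, so the frustum stays aimed at the physical screen
// while the camera itself follows the head.
let projection = offAxisProjection(left: -0.1, right: 0.1, bottom: -0.06, top: 0.06,
                                   near: 0.1, far: 100)
cameraEntity.components.set(ProjectiveTransformCameraComponent(projectionMatrix: projection))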
I implemented an EntityAction to change the baseColor tint, and had it working on visionOS 2.x.
import RealityKit
import UIKit

typealias Float4 = SIMD4<Float>

extension UIColor {
    var float4: Float4 {
        if
            cgColor.numberOfComponents == 4,
            let c = cgColor.components
        {
            Float4(Float(c[0]), Float(c[1]), Float(c[2]), Float(c[3]))
        } else {
            Float4()
        }
    }
}

struct ColourAction: EntityAction {
    // MARK: - PUBLIC PROPERTIES

    let startColour: Float4
    let targetColour: Float4

    // MARK: - PUBLIC COMPUTED PROPERTIES

    var animatedValueType: (any AnimatableData.Type)? { Float4.self }

    // MARK: - INITIALISATION

    init(startColour: UIColor, targetColour: UIColor) {
        self.startColour = startColour.float4
        self.targetColour = targetColour.float4
    }

    // MARK: - PUBLIC STATIC FUNCTIONS

    @MainActor static func registerEntityAction() {
        ColourAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }

            let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(interpolatedColour)
        }
    }
}

extension Entity {
    // MARK: - PUBLIC FUNCTIONS

    func changeColourTo(_ targetColour: UIColor, duration: Double) {
        guard
            let modelComponent = components[ModelComponent.self],
            let material = modelComponent.materials.first as? PhysicallyBasedMaterial
        else {
            return
        }

        let colourAction = ColourAction(startColour: material.baseColor.tint, targetColour: targetColour)

        if let colourAnimation = try? AnimationResource.makeActionAnimation(for: colourAction, duration: duration, bindTarget: .material(0).baseColorTint) {
            playAnimation(colourAnimation)
        }
    }
}
This doesn't work on visionOS 26. My current fix is to directly set the material's base color tint, but this feels like the wrong approach:
@MainActor static func registerEntityAction() {
    ColourAction.subscribe(to: .updated) { event in
        guard
            let animationState = event.animationState,
            let entity = event.targetEntity,
            let modelComponent = entity.components[ModelComponent.self],
            var material = modelComponent.materials.first as? PhysicallyBasedMaterial
        else { return }

        let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
        material.baseColor.tint = UIColor(interpolatedColour)
        entity.components[ModelComponent.self]?.materials[0] = material
        animationState.storeAnimatedValue(interpolatedColour)
    }
}
So before I raise this as a bug, was I doing anything wrong in the former version and got lucky? Is there a better approach?
Dear colleagues, when will you add the ability to manually adjust the display's color temperature? Everyone is familiar with the color rendering issue on the 17 series. Many complain about the display's yellowish tint. I'm one of those people, with a G9N panel, but I can't get whites right; there's a persistent yellow tint. TrueTone only solves this problem under cool, white lighting conditions. So, the display might work as intended, but how can I make it work consistently? If there were a way to manually adjust TrueTone, many users wouldn't be so upset when buying a new device, fearing the display would be yellow. I'd like users to be able to choose their preferred color, warm or cool, and have a scale to adjust it! This would solve the yellowish tint issue on 17 series displays! Thank you.
Hello! I'm currently porting a video game console emulator to iOS and I'm trying to make the renderer (tested on macOS) work on iOS as well.
The emulator core is written in C++ and uses metal-cpp for rendering, whereas the iOS frontend is written in Swift with SwiftUI. I have an Objective-C++ bridging header for bridging the Swift and C++ sides.
On the Swift side, I create an MTKView. Inside the MTKView delegate, I run the emulator for 1 video frame and pass it the view's backing layer for it to render the final output image with. The emulator runs and returns, but when it returns I get a crash in Swift land (callstack attached below), inside objc_release, which indicates I'm doing something wrong with memory management.
My bridging interface (ios_driver.h):
#pragma once
#include <Foundation/Foundation.h>
#include <QuartzCore/QuartzCore.h>
void iosCreateEmulator();
void iosRunFrame(CAMetalLayer* layer);
Bridge implementation (ios_driver.mm):
#import <Foundation/Foundation.h>
extern "C" {
#include "ios_driver.h"
}
<...>
#define IOS_EXPORT extern "C" __attribute__((visibility("default")))
std::unique_ptr<Emulator> emulator = nullptr;
IOS_EXPORT void iosCreateEmulator() { ... }
// Runs 1 video frame of the emulator and
IOS_EXPORT void iosRunFrame(CAMetalLayer* layer) {
    void* layerBridged = (__bridge void*)layer;

    // Pass the CAMetalLayer to the emulator
    emulator->getRenderer()->setMTKLayer(layerBridged);

    // Runs the emulator for 1 frame and renders the output image using our layer
    emulator->runFrame();
}
My MTKView delegate:
class Renderer: NSObject, MTKViewDelegate {
    var parent: ContentView
    var device: MTLDevice!

    init(_ parent: ContentView) {
        self.parent = parent
        if let device = MTLCreateSystemDefaultDevice() {
            self.device = device
        }
        super.init()
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        let metalLayer = view.layer as! CAMetalLayer
        // Run the emulator for 1 frame & display the output image
        iosRunFrame(metalLayer)
    }
}
Finally, the emulator's render function that interacts with the layer:
void RendererMTL::setMTKLayer(void* layer) {
    metalLayer = (CA::MetalLayer*)layer;
}

void RendererMTL::display() {
    CA::MetalDrawable* drawable = metalLayer->nextDrawable();
    if (!drawable) {
        return;
    }

    MTL::Texture* texture = drawable->texture();
    <rest of rendering follows here using the drawable & its texture>
}
This is the Swift callstack at the time of the crash:
To my understanding, I shouldn't be violating ARC rules, since my bridging header uses CAMetalLayer* instead of void* and Swift automatically accounts for ARC when passing Objective-C objects across the bridge. However, I don't have any other idea as to what might be causing this. I've been trying to debug this code for a couple of days without much success.
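One thing I still plan to rule out (an assumption on my part, not a confirmed cause): metal-cpp hands back autoreleased objects, e.g. the drawable from nextDrawable(), and running per-frame work without any autorelease pool in place is a known pitfall. Wrapping the call site would look like this:

func draw(in view: MTKView) {
    guard let metalLayer = view.layer as? CAMetalLayer else { return }
    autoreleasepool {
        // Run the emulator for 1 frame & display the output image
        iosRunFrame(metalLayer)
    }
}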
If you need more info, the emulator code is also on GitHub:
Metal renderer: https://github.com/wheremyfoodat/Panda3DS/blob/ios/src/core/renderer_mtl/renderer_mtl.cpp#L58-L68
Bridge implementation: https://github.com/wheremyfoodat/Panda3DS/blob/ios/src/ios_driver.mm
Bridging header: https://github.com/wheremyfoodat/Panda3DS/blob/ios/include/ios_driver.h
Any help is more than appreciated. Thank you for your time in advance.
I have run into an issue where I am trying to use atomic_float in a Swift package, but I cannot get things to compile because it appears that the Swift Package Manager doesn't support Metal 3 (atomic_float is Metal 3 functionality). Is there any way around this? I am using
// swift-tools-version: 6.1
and my Metal code includes:
#include <metal_stdlib>
#include <metal_geometric>
#include <metal_math>
#include <metal_atomic>
using namespace metal;
kernel void test(device atomic_float* imageBuffer [[buffer(1)]],
                 uint id [[thread_position_in_grid]]) {
}
But I get an error on the definition of atomic_float .
Any help, or more importantly a pointer to where I could have found this information about the limitation, would be appreciated.
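In the meantime, the workaround I'm falling back to is compiling the shader source at runtime with an explicit language version (a sketch; it assumes the kernel source ships as a plain resource string instead of being precompiled by SwiftPM):

import Metal

func makeAtomicFloatLibrary(device: MTLDevice, source: String) throws -> MTLLibrary {
    let options = MTLCompileOptions()
    options.languageVersion = .version3_0   // atomic_float requires the Metal 3 language version
    return try device.makeLibrary(source: source, options: options)
}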
-RadBobby
Hey everyone, I am currently developing an app for visionOS and using Reality Composer Pro to create scenes to put in my app.
I have a humanoid model with hair strands, and each strand of hair has an opacity map. However, some reflections are still visible even though the opacity is zero. There is also some weird culling among hair strands (in the left circle) and there are weird reflections in the hair cards (in the right circle).
Here's my settings for the materials.
Since all the hair strands are interconnected with each other, it is hard to decide the drawing order in Xcode, so I am wondering if there's an easier way to handle transparent objects.
Please let me know if you know anything helpful, much appreciated!
I use Unity 2020.3.48f1 to develop a game. To implement Apple services integration I use the Apple Unity plug-ins (https://github.com/apple/unityplugins). With the latest version of the plug-ins I get an error in the Unity project after importing the plug-in; it says "Not allowed platform VisionOS". When I tried an older version of the plug-ins, I get a runtime error when calling "var fetchItemsResponse = await GKLocalPlayer.Local.FetchItems();" on line 42: it drops an EXC_BAD_ACCESS (code=257, address=0x0000...) error. I tried different commits from the official repository and even custom branches of the Apple Unity plug-ins, like https://github.com/muZZkat/unityplugins/tree/muzzkat/fix-fetch-items, but it did not help.
Here is the whole script that tries to use the Apple Unity plug-ins:
using System.Threading.Tasks;
using UnityEngine;
using System.Collections;
using System;
using Apple.GameKit;
using UnityEngine.UI;

public class TheScript : MonoBehaviour
{
    [SerializeField]
    InputField otp;

    string Signature;
    string TeamPlayerID;
    string Salt;
    string PublicKeyUrl;
    string Timestamp;

    void Start()
    {
        StartCoroutine(Call());
    }

    private IEnumerator Call()
    {
        yield return new WaitForSeconds(5);
        Login();
    }

    public async Task Login()
    {
        otp.text += $"Loginig... ";
        if (!Apple.GameKit.GKLocalPlayer.Local.IsAuthenticated)
        {
            try
            {
                var player = await GKLocalPlayer.Authenticate();
                var localPlayer = GKLocalPlayer.Local;
                TeamPlayerID = localPlayer.TeamPlayerId;

                var fetchItemsResponse = await GKLocalPlayer.Local.FetchItems();

                Signature = Convert.ToBase64String(fetchItemsResponse.GetSignature());
                PublicKeyUrl = fetchItemsResponse.PublicKeyUrl;

                otp.text += $"Team Player ID: {TeamPlayerID} ";
                otp.text += $"PublicKeyUrl: {PublicKeyUrl} ";
            }
            catch (Exception e)
            {
                otp.text += $"Error: " + e.Message;
            }
        }
        else
        {
            Debug.Log("AppleGameCenter player already logged in.");
        }
    }

    async Task SignInWithAppleGameCenterAsync(string signature, string teamPlayerId, string publicKeyURL, string salt, ulong timestamp)
    {
    }
}
Turn-based games: 2 players
When an opponent declines a game in the Game Center MatchMaker VC, that player sees that they quit, but no message is sent to the listener about that fact. For the person who started the match, their MMVC shows it's their turn
again. Why doesn't Game Center end the match?
After watching the WWDC 2025 session "Combine Metal 4 machine learning and graphics", I decided to give it a shot and integrate the new MTL4MachineLearningCommandEncoder into my existing render pipeline. After a lot of trial and error, I managed to set up the pipeline and get the app to compile.
However, I am now stuck on creating a MTLLibrary with .mtlpackage.
Here is the code I use to create an MTLLibrary, following the WWDC session (https://developer.apple.com/videos/play/wwdc2025/262/?time=550):
let coreMLFilePath = bundle.path(forResource: "my_model", ofType: "mtlpackage")!
let coreMLURL = URL(string: coreMLFilePath)!

do {
    let library = try metalDevice.makeLibrary(URL: coreMLURL)
} catch {
    print("error: \(error)")
}
With the above code, I am getting error:
Error Domain=MTLLibraryErrorDomain Code=1 "Invalid metal package" UserInfo={NSLocalizedDescription=Invalid metal package}
What is the correct way to create a MTLLibrary with .mtlpackage? Do I see this error because the .mtlpackage I am using is incorrect? How should I go with debugging this?
I'd really appreciate if I could get some help on this as I have been stuck with it for some time now. Thanks in advance!
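One detail I'm double-checking myself (an assumption, not a confirmed cause): URL(string:) on a plain file path yields a URL without a file:// scheme, so I'm also trying a proper file URL straight from the bundle:

let coreMLURL = bundle.url(forResource: "my_model", withExtension: "mtlpackage")!
do {
    let library = try metalDevice.makeLibrary(URL: coreMLURL)
} catch {
    print("error: \(error)")
}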
Hi,
I'm trying to add Game Center challenges and activities to an already-live game, but they are not appearing for testing in the game, in Game Center, or in the Games app.
I know the game is set up with GameKit entitlements, since this is a live game and it has working leaderboards and achievements.
I've updated to Tahoe beta 8, added a challenge and an activity in App Store Connect, added them to a new distribution, and added that distribution to 'Add for Review'.
I'm using Unity and the Apple Unity plug-in.
Not sure what other steps I'm missing.
Thanks