Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

All subtopics

Posts under Media Technologies topic

Post · Replies · Boosts · Views · Activity

Sound not working on TestFlight / App Store
I have a Flutter iOS app that has some simple sound FX for button clicks, swipes, etc. In the simulator and on a real device the sound works fine, but when I upload the app to TestFlight (and the App Store) the sound FX don't play. When I upload the app to my phone via Xcode I am using the release profile, so I don't see what the difference could be. I have also gone through the archive that I uploaded and verified that the sound files are indeed there. I have other Flutter apps that use sound, but none since the iOS 26 update. I've tried 3 different Flutter sound libraries and all face the same issue. Wondering if anyone else is seeing this issue, or if I'm missing a simple permission or something that has changed recently? Thanks in advance
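A minimal sketch, assuming the stock Flutter Runner AppDelegate, of one thing worth ruling out: whether the audio session category leaves the effects subject to the silent switch. This is an assumption about the cause, not a confirmed difference between Xcode and TestFlight builds.

import AVFoundation
import Flutter
import UIKit

@main
@objc class AppDelegate: FlutterAppDelegate {
  override func application(
    _ application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
  ) -> Bool {
    // Keep short sound effects audible even when the ringer switch is silent.
    do {
      let session = AVAudioSession.sharedInstance()
      try session.setCategory(.playback, options: [.mixWithOthers])
      try session.setActive(true)
    } catch {
      print("Audio session setup failed: \(error)")
    }
    GeneratedPluginRegistrant.register(with: self)
    return super.application(application, didFinishLaunchingWithOptions: launchOptions)
  }
}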
2 replies · 0 boosts · 232 views · Dec ’25
Disconnect from AirPlay device programmatically
Hello there, I'm trying to implement a feature that uses AirPlay with Apple TV. I want to disconnect from the device programmatically when something happens. By "something" I mean a situation where the user wants to stop broadcasting (for example, closing the PiP window on their phone). I use this snippet:

try audioSession.setCategory(.playAndRecord, options: .defaultToSpeaker)
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

It works fine sometimes but not always (it works on iOS 18 but it doesn't on iOS 17 or ). So I thought it was a bug and created a ticket in Feedback Assistant (FB21220013). Support told me to write a post on the forum.
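A minimal sketch of the deactivation path described above, wrapped for reuse; the explicit speaker override is an extra idea the post doesn't confirm, so treat it as an assumption:

import AVFoundation

// Try to pull playback back from the AirPlay route to the built-in speaker.
func returnAudioToDevice() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
        // Forcing the output port is an additional, unconfirmed idea; it may
        // behave differently across iOS versions.
        try session.overrideOutputAudioPort(.speaker)
        // .notifyOthersOnDeactivation only matters when deactivating a session,
        // so it is omitted from this activation call.
        try session.setActive(true)
    } catch {
        print("Failed to change the audio route: \(error)")
    }
}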
2 replies · 0 boosts · 420 views · Dec ’25
FairPlay-Protected HLS Files Not Transferred via Quick Start
I have an iOS app that downloads HLS files, which are protected by FairPlay. These files are stored locally, and their locations are managed using Core Data. When playing these tracks, I use AVURLAsset to access the stored file paths.

Recently, a client upgraded to a new iPhone and used Quick Start to transfer data from his old device. While all other app data was successfully transferred, including Core Data records and UserDefaults, the actual HLS files were missing. As a result, the app retained metadata about the downloaded content, but the files themselves were gone, causing playback failures.

Does Quick Start exclude certain types of locally stored files, especially DRM-protected HLS downloads, or is the issue related to how FairPlay-protected content is handled during the transfer of locally stored files?
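Separately from whether Quick Start copies protected downloads, a defensive sketch (assuming download locations are persisted as strings, which the post implies but doesn't show) is to store home-relative paths and verify the file still exists before building the AVURLAsset, since absolute container paths can change after a restore or app update:

import AVFoundation
import Foundation

// Store a downloaded asset's location relative to the home directory,
// because the absolute sandbox path can change across restores and updates.
func relativeLocation(for downloadLocation: URL) -> String {
    downloadLocation.relativePath
}

// Re-resolve the stored path and confirm the file is still on disk before
// creating the asset; return nil so the caller can trigger a re-download.
func assetIfStillPresent(relativePath: String) -> AVURLAsset? {
    let url = URL(fileURLWithPath: NSHomeDirectory()).appendingPathComponent(relativePath)
    guard FileManager.default.fileExists(atPath: url.path) else { return nil }
    return AVURLAsset(url: url)
}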
2 replies · 0 boosts · 233 views · Mar ’25
Live HLS Stream Not Playing After Window Resize in Vision Pro
Hi everyone, I'm developing a visionOS app for Apple Vision Pro, and I've encountered an issue related to window resizing at runtime when using AVPlayer to play a live HLS stream.

✅ What I'm trying to do
- Play a live HLS stream (from Wowza) inside my app using AVPlayer.
- Support resizing the immersive window using Vision Pro's built-in runtime scaling gesture.
- The stream works fine at the default window size when the app launches.

❌ Problem
If I resize the app's window at runtime (using the Vision Pro pinch-drag gesture) and then try to start the stream, it does not play. It just shows the "Loading live stream..." state and never proceeds to playback. This issue only occurs after resizing the window; if I don't resize, the stream works perfectly every time.

🧪 What I've tried
- Verified the HLS URL; it works and plays fine in Safari and in the app before resizing.
- Set .automaticallyWaitsToMinimizeStalling = false on AVPlayer.
- Observed that .status on AVPlayerItem never reaches .readyToPlay after resizing.
- Tried to force the window size back using UIWindowScene.requestGeometryUpdate(...), but the behavior persists.
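Not a fix, but a diagnostic sketch (the class and method names here are illustrative, not from the original app): rebuild the AVPlayerItem after a resize and log its status transitions, to see whether a freshly created item ever reaches .readyToPlay.

import AVFoundation
import Combine

final class LiveStreamController {
    let player = AVPlayer()
    private var cancellables = Set<AnyCancellable>()

    // Rebuild the item from scratch (e.g. after the window was resized) and
    // log every status transition so we can see where loading stalls.
    func restart(with url: URL) {
        cancellables.removeAll()
        let item = AVPlayerItem(url: url)

        item.publisher(for: \.status)
            .sink { status in
                switch status {
                case .readyToPlay: print("item ready")
                case .failed: print("item failed: \(String(describing: item.error))")
                default: print("item status: \(status.rawValue)")
                }
            }
            .store(in: &cancellables)

        player.replaceCurrentItem(with: item)
        player.play()
    }
}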
2 replies · 0 boosts · 271 views · Jul ’25
macOS Sequoia 'Cannot Decode' HLS Video
I use AVPlayer to play HLS video successfully on macOS Sonoma, but I encountered this error on macOS Sequoia. Please help me:

Error Domain=AVFoundationErrorDomain Code=-11833 "Cannot Decode" UserInfo={NSUnderlyingError=0x600001e57330 {Error Domain=CoreMediaErrorDomain Code=-12906 "(null)"}, NSLocalizedFailureReason=The decoder required for this media cannot be found., AVErrorMediaTypeKey=vide, NSLocalizedDescription=Cannot Decode}

Thanks!
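CoreMedia -12906 with "the decoder required for this media cannot be found" usually points at a codec the machine can't decode. A small diagnostic sketch (the codec list is an assumption; substitute whatever the stream's variants actually use) reports hardware decode support, which is only a hint, since software decoders also exist for these codecs:

import CoreMedia
import VideoToolbox

// Report whether this Mac advertises hardware decode support for the codecs
// the HLS variants might use (adjust the list to match the stream).
let codecs: [(String, CMVideoCodecType)] = [
    ("H.264", kCMVideoCodecType_H264),
    ("HEVC", kCMVideoCodecType_HEVC),
]
for (name, codec) in codecs {
    print("\(name) hardware decode supported:", VTIsHardwareDecodeSupported(codec))
}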
2 replies · 1 boost · 673 views · Mar ’25
iOS 26 Beta Personal Voice bug affecting AVSpeechSynthesizer
I have sent in a feedback report (FB18222398), but I have no idea if anyone has looked at it. I know from past experience that Apple devs do look at these forums.

This applies to each of the betas: 1, 2, and 3. I have created a new Personal Voice with each beta. I create a Personal Voice in English. When it's done processing, I tap Preview and it says, in English, what is expected. But after some time, an hour or a day, the voice file changes languages and no longer works properly. If I press Preview, it is no longer intelligible. I have a text-to-speech app, and initially the created voice works, but once the language of the file changes, it no longer works.

I have run an app on my iPhone through Xcode that prints to the console the voices installed on the device, with their languages. Currently this is the voice file:

Voice Identifier: com.apple.speech.personalvoice.AAA9C6F2-9125-475F-BA2F-22C63274991D
Language: es-MX

and on a second device the same Personal Voice is in a different language:

Voice Identifier: com.apple.speech.personalvoice.AAA9C6F2-9125-475F-BA2F-22C63274991D
Language: zh-CN

A previous Personal Voice file that was listed as Spanish (Mexico) still played English with a Spanish accent, and when playing Spanish text it sounded almost perfect. This current Personal Voice doesn't do that and is unintelligible. Previous attempts have converted to Chinese. I hope someone can look into this.
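For anyone trying to reproduce the language flip, a minimal sketch of the kind of voice dump described above (the function name is illustrative):

import AVFoundation

// List every installed voice, flagging Personal Voices, so the identifier and
// reported language can be compared across devices and over time.
func dumpInstalledVoices() {
    for voice in AVSpeechSynthesisVoice.speechVoices() {
        let personal = voice.voiceTraits.contains(.isPersonalVoice) ? " [personal]" : ""
        print("\(voice.identifier)  language: \(voice.language)\(personal)")
    }
}

// Personal Voices only show up after the user grants authorization.
AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    if status == .authorized { dumpInstalledVoices() }
}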
2 replies · 0 boosts · 540 views · Dec ’25
Attaching color properties to CVPixelBufferRef
I believe this should work:

CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionaryAddValue(attrs, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2);
CFDictionaryAddValue(attrs, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2);
CFDictionaryAddValue(attrs, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2);

CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, attrs, &pixelBuffer);

assert(CFDictionaryGetCount(CVBufferGetAttachments(pixelBuffer, kCVAttachmentMode_ShouldPropagate)) > 0);

But that last assert fails, so it appears the color info does not get attached. kCVImageBufferColorPrimariesKey and the others are not among the keys listed under BufferAttributeKeys, but I think they're supposed to be allowed because they're listed by CMVideoFormatDescriptionGetExtensionKeysCommonWithImageBuffers(). I'm hoping that putting the color matrix info in there will control how AVAssetWriter converts the RGB to YCbCr.
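A possible workaround sketch, in Swift for brevity (this is an assumption about a fix, not confirmed behavior of the attributes dictionary): set the colorimetry keys as attachments explicitly after creating the buffer, which is what the attachment accessors read:

import CoreVideo

let width = 1920, height = 1080  // illustrative dimensions
var pixelBuffer: CVPixelBuffer?
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32ARGB, nil, &pixelBuffer)

if let buffer = pixelBuffer {
    // Attach the colorimetry explicitly so it is present in the
    // shouldPropagate attachments read back by CVBufferGetAttachments.
    CVBufferSetAttachment(buffer, kCVImageBufferColorPrimariesKey,
                          kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(buffer, kCVImageBufferTransferFunctionKey,
                          kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(buffer, kCVImageBufferYCbCrMatrixKey,
                          kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
}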
2 replies · 0 boosts · 398 views · Nov ’25
I cannot acquire entitlement named com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning.
AVCaptureVideoDataOutput.preparesCellularRadioForNetworkConnection requires com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning, but I cannot acquire that entitlement. I can't find it in 'Certificates, Identifiers & Profiles'. Any solutions?

Provisioning profile "iOS Team Provisioning Profile: ......" doesn't include the com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning entitlement.
2 replies · 0 boosts · 523 views · Jul ’25
Issues with "AVMetricEventStreamPublisher Discover Media Performance Metrics in AVFoundation" Example Code
Hi everyone! I've been working with AVFoundation and trying to use AVMetricEventStreamPublisher to discover media performance metrics, as described in the Apple documentation: https://developer.apple.com/cn/videos/play/wwdc2024/10113/?time=508

However, when following the example code, I'm not getting the expected results. The performance metrics for both audio and video don't seem to be captured properly. Has anyone successfully used this example code? If so, could you share your experience or any solutions you've found? Any tips or insights would be greatly appreciated. Thanks in advance!

P.S. The example code:

AVPlayerItem *item = ...
AVMetricEventStream *eventStream = [AVMetricEventStream eventStream];
id subscriber = [[MyMetricSubscriber alloc] init];
[eventStream setSubscriber:subscriber queue:mySerialQueue];
[eventStream subscribeToMetricEvent:[AVMetricPlayerItemLikelyToKeepUpEvent class]];
[eventStream subscribeToMetricEvent:[AVMetricPlayerItemPlaybackSummaryEvent class]];
[eventStream addPublisher:item];
2 replies · 0 boosts · 208 views · Nov ’25
Will the new Automix feature in iOS 26 be available for third-party apps using MusicKit?
Hi everyone, we're currently developing a music-based app using MusicKit, and we recently noticed that the iOS 26 beta introduces a new "Automix" feature in the Apple Music app. This enables seamless DJ-style transitions between songs, beyond the standard crossfade functionality.

We're trying to understand:
- Will this Automix feature be accessible to third-party apps that use MusicKit?
- If it isn't available in the initial iOS 26 release, is there a plan to expose it through public APIs in a future update?
- Is there any technical documentation, WWDC session, or roadmap info regarding Automix support via MusicKit?

This functionality would be a significant enhancement for our app, especially for intelligent audio transitions and curated playlists. Thanks.
2 replies · 3 boosts · 848 views · Nov ’25
"Baking together" two audio tracks into one for drag-and-drop
Hi all, with my app ScreenFloat, you can record your screen along with system and microphone audio. Those two audio feeds are recorded into separate audio tracks so they can be individually removed or edited later on. Recordings you create with ScreenFloat can then be drag-and-dropped to other apps instantly.

So far, so good, but some apps, like Slack or VLC, and even websites like YouTube, do not play back multiple audio tracks, just one. So what I'm trying to do is, on dragging the video recording file out of ScreenFloat, instantly bake the two individual audio tracks together into one and offer that new file as the drag-and-drop file, so that all audio is played in the target app. But it's slow. I mean, it's actually quite fast, but for drag and drop, it's slow.

My approach is this:
1. "Bake together" the two audio tracks into a one-track m4a audio file using AVMutableAudioMix and AVAssetExportSession.
2. Take the video track, add the new audio file as an audio track to it, and render that out using AVAssetExportSession.

For a quick benchmark, with a 3'40'' movie, step 1 takes ~1.7 seconds and step 2 adds another ~1.5 seconds, so we're at ~3.2 seconds. That's an eternity for a drag and drop, where the user might cancel if there's no immediate feedback. I could also do it in one step, but then I couldn't use the AV*Passthrough preset, and that makes it take around 32 seconds, because I assume it touches the video data (which is unnecessary in this case, so I think the two-step approach here is the fastest).

So, my question is: is there a faster way?

The best idea I can come up with right now is, when initially recording the screen with system and microphone audio as separate tracks, to also record both of them into a third, muted, "hidden" track I could use later on, basically eliminating the need for step 1: just rip the two single audio tracks out of the movie and keep only the video and the (then unmuted) "hidden" track. But I'd still have a ~1.5 second delay there, plus the processing and data overhead (basically doubling the movie's audio data). All this would be fine for an export operation (where one expects it to take a little time), but for a drag-and-drop operation, it's not ideal.

I've discarded the idea of doing a file-promise drag, because many apps do not accept those, and I want to keep wide compatibility with all sorts of apps.

I'd appreciate any ideas or pointers. Thank you kindly, Matthias
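For context, a rough sketch of what step 1 above could look like (names, error handling, and the decision to skip AVMutableAudioMix are illustrative, not ScreenFloat's actual code): merge the source movie's audio tracks into a single-track m4a via an AVMutableComposition and an M4A export.

import AVFoundation

// Mix all audio tracks of `movieURL` down into a single-track .m4a.
func bakeAudioTracks(of movieURL: URL, to outputURL: URL,
                     completion: @escaping (Error?) -> Void) {
    let asset = AVURLAsset(url: movieURL)
    let composition = AVMutableComposition()
    let range = CMTimeRange(start: .zero, duration: asset.duration)

    // Copy every source audio track into its own composition track;
    // the M4A export then mixes them down into one output track.
    for sourceTrack in asset.tracks(withMediaType: .audio) {
        let track = composition.addMutableTrack(withMediaType: .audio,
                                                preferredTrackID: kCMPersistentTrackID_Invalid)
        try? track?.insertTimeRange(range, of: sourceTrack, at: .zero)
    }

    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetAppleM4A) else {
        completion(NSError(domain: "bake", code: -1)); return
    }
    export.outputURL = outputURL
    export.outputFileType = .m4a
    export.exportAsynchronously {
        completion(export.error)
    }
}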
2 replies · 0 boosts · 712 views · Mar ’25
builtInLiDARDepthCamera doesn't work on the 2020 iPad Pro on iOS 26
On iOS 26.1, this throws on the 2020 iPad Pro (4th gen) but works fine on an M4 iPad Pro or iPhone 15 Pro:

guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
    throw ConfigurationError.lidarDeviceUnavailable
}

It's the standard code from Apple's own sample, so it obviously used to work: https://developer.apple.com/documentation/AVFoundation/capturing-depth-using-the-lidar-camera

Does it fail because Apple has silently dropped support for the older LiDAR sensor used prior to the M4 iPad Pro, or is there another reason? What about the 5th and 6th gen iPad Pro, does it still work on those?
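As a diagnostic sketch (not a fix), enumerating every depth-capable device the OS exposes on the affected iPad should show whether the LiDAR device type has disappeared entirely or only the .default lookup is failing:

import AVFoundation

// List the depth-capable capture devices this OS build exposes on the device.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInLiDARDepthCamera, .builtInTrueDepthCamera, .builtInDualWideCamera],
    mediaType: .video,
    position: .unspecified)

for device in discovery.devices {
    print(device.localizedName, device.deviceType.rawValue,
          "depth formats:", device.activeFormat.supportedDepthDataFormats.count)
}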
2 replies · 0 boosts · 493 views · Nov ’25
PHPhotoLibrary.performChanges completionHandler not called when deleting assets on iOS 26
In my app, I use the API provided by the Photos framework to delete specified photos. But after upgrading to iOS 26, the delete function no longer works on some iOS devices. The API never triggers the system confirmation dialog, and the completionHandler is never called. In the iOS Photos app, deletion works correctly on the same assets, but calling the API from my app does not work.

Steps to Reproduce
1. Make sure the app has Full Photo Library Access.
2. Execute the following code:

PHPhotoLibrary.shared().performChanges({
    let assetsToBeDeleted = PHAsset.fetchAssets(withLocalIdentifiers: delUrls, options: nil)
    PHAssetChangeRequest.deleteAssets(assetsToBeDeleted)
}, completionHandler: completionHandler)

Expected Behavior
The system should present a confirmation dialog asking the user to delete the selected photos. After the user confirms, the deletion should occur, and the completionHandler should be called with success or an error.

Actual Behavior
The system delete confirmation dialog does not appear. The completionHandler is never called.

Environment
iOS versions: 26.1 / 26.0.1

It looks like an API bug. I want to check: is it a known issue, and will it be fixed? Thanks
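One comparison worth making (an assumption, not a known workaround): the async throwing variant of the same call, which at least surfaces an error instead of hanging silently if the change request is rejected.

import Photos

// `identifiers` stands in for the same local identifiers used above.
func deleteAssets(identifiers: [String]) async {
    do {
        try await PHPhotoLibrary.shared().performChanges {
            let assets = PHAsset.fetchAssets(withLocalIdentifiers: identifiers, options: nil)
            PHAssetChangeRequest.deleteAssets(assets)
        }
        print("Assets deleted")
    } catch {
        print("Delete failed: \(error)")
    }
}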
2 replies · 1 boost · 288 views · Nov ’25
Hosting x86 Audio Units on Silicon Mac
My app encountered problems when trying to open an x86 Audio Unit v2 on an Apple Silicon Mac (although Rosetta is installed). There seems to be an XPC connection issue with the AUHostingService that I don't know how to fix. I observed other host apps opening the same plugins without problems, so there is probably something wrong or incompatible in my code.

I noticed that:
- The issue occurs whether or not the app is sandboxed.
- The issue no longer occurs when the app itself runs under Rosetta.
- There is no error reported by CoreAudio during allocation and initialization of the audio unit.
- The first errors appear when the unit calls AudioUnitRender from the rendering callback. With most x86 plugins, the error is kAudioUnitErr_RenderTimeout on the first call and kAudioComponentErr_InstanceInvalidated on any subsequent call.
- On the UI side, when the Cocoa view is loaded, it appears briefly, then disappears immediately, leaving its superview empty.
- With another x86 plugin, the Cocoa view is loaded normally, but CoreAudio still emits kAudioUnitErr_NoConnection from AudioUnitRender, whether the view has been loaded or not, and the plugin produces no sound.

I also find these messages in the console (printed in that order):

CLIENT ERROR: RemoteAUv2ViewController does not override - and thus cannot react to catastrophic errors beyond logging them
AUAudioUnit_XPC.mm:641 Crashed AU possible component description: aumu/Helm/Tyte

My app uses the AUv2 API, and I suspect that working with the AUv3 API would spare me these problems. However, considering how my audio system is built (audio units are wrapped in C++ classes, and most connections between units are managed on the fly from the rendering callback), it would be a lot of work to convert, and I'm not even sure that everything I do with the AUv2 API would be possible with the AUv3 API. I could possibly find an intermediate solution, but in the immediate future I'm looking for the simplest and fastest possible fix.

If I cannot find better, I see two fallback options:
1. In this part of the doc: "Beginning with macOS 11, the system loads audio units into a separate process that depends on the architecture or host preference", does "host preference" mean it would be possible to disable the "out of process" behavior, for example via the app's entitlements or Info.plist?
2. Otherwise, as a last resort, I could completely disable the use of x86 Audio Units when my app runs under arm64, at least to make things cleaner. But the Audio Component API doesn't give any info about the plugin architecture, so how could I find it? (See the sketch below.)

Any tip or idea about this issue will be much appreciated. Thanks in advance!
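A possible workaround sketch for the architecture question in option 2 (it assumes you can locate the plug-in's .component bundle on disk, e.g. by scanning /Library/Audio/Plug-Ins/Components, which the Audio Component API itself doesn't hand you): Foundation can report a bundle's executable architectures.

import Foundation

// Report which CPU architectures a plug-in bundle's executable contains.
// Locating the .component bundle for a given AudioComponent is left out here.
func architectures(ofPlugInAt bundleURL: URL) -> [String] {
    guard let archs = Bundle(url: bundleURL)?.executableArchitectures else { return [] }
    return archs.map { number in
        switch number.intValue {
        case NSBundleExecutableArchitectureARM64: return "arm64"
        case NSBundleExecutableArchitectureX86_64: return "x86_64"
        case NSBundleExecutableArchitectureI386: return "i386"
        default: return "unknown (\(number.intValue))"
        }
    }
}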
2 replies · 0 boosts · 630 views · Nov ’25
iPad app on macOS not asking for microphone permission
Hello, I have an iOS app that records audio, and it works fine on iPads and iPhones. It asks for microphone permission, and after that, recording works. I installed the same app on my M3 MacBook via TestFlight, since iPad apps are supposed to work that way without changes. The app starts fine and everything, but it never asks for microphone permission, so I can't record. Do I need to do something to make this happen? (This is not Mac Catalyst; it's running the arm64 iPhone binary on macOS.) Thanks
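Not a confirmed cause, but a sketch like this can rule out whether the prompt is simply never being requested explicitly; the Info.plist must also contain NSMicrophoneUsageDescription:

import AVFoundation

// Explicitly request record permission before the first recording. On iPhone
// and iPad this often happens implicitly; when the iPad binary runs on macOS
// it is worth requesting it up front.
func ensureMicrophoneAccess(_ completion: @escaping (Bool) -> Void) {
    let session = AVAudioSession.sharedInstance()
    switch session.recordPermission {
    case .granted: completion(true)
    case .denied: completion(false)
    case .undetermined: session.requestRecordPermission(completion)
    @unknown default: completion(false)
    }
}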
2 replies · 1 boost · 865 views · Mar ’25
AVPlayer HLS High Bitrate Problem on Apple TV HD (A1625) tvOS 26
Hello, we have a video streaming app with HLS VOD content. We supply 1080p and 4K content to users. Users were able to watch 1080p content before tvOS 26, but they can no longer watch 1080p content after updating to tvOS 26. We have not changed anything on the HLS playlist side or in the application version. This problem only occurs on the Apple TV HD (4th generation, A1625) running tvOS 26; there is no problem with newer Apple TV devices. Could you help us resolve this problem? Thanks in advance
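For anyone triaging something similar, a small sketch that logs the variant the player actually selected (standard access-log properties; interpreting the result is left to the reader), to see whether the 1080p rendition is being switched away from or never chosen at all:

import AVFoundation

// Log the most recent access-log entry for the playing item.
func logSelectedVariant(for item: AVPlayerItem) {
    guard let event = item.accessLog()?.events.last else { return }
    print("indicated bitrate:", event.indicatedBitrate,
          "observed bitrate:", event.observedBitrate,
          "uri:", event.uri ?? "unknown")
}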
2 replies · 1 boost · 587 views · Oct ’25
APMP & Photography?
Hi, I'm a fan of the gallery on Vision Pro, which has video as well as still photography, but I'm wondering if Apple has considered adding the projected media tags to HEIC so that we can go the next step from Spatial photos to Immersive photos. I have a device that can give me 12K x 6K fisheye images in HDR, but it can't do it at a frame rate or resolution that's good enough for video, so I want to cut my losses and show off immersive photos instead.

Is there something Apple is already working on for APMP stills, or should I create my own app that reads metadata inside a HEIC, inferring it in a similar way to what the "ProjectedMediaConversion" demo does for video? It would be great to have 180VR photos, which could show as Spatial in a gallery view, but going immersive would half-surround you instead of floating in the blurred view. I think that would be a pretty amazing effect.
2 replies · 0 boosts · 322 views · Oct ’25