For devices that are still on iOS 17, playing FairPlay-encrypted content still works fine. For devices that I've upgraded to iOS 26, playing the same content in the same app no longer works. I can advance and see the stream's frames by tapping +10 or by scrubbing, so I know the content is being decrypted, but tapping the play button of AVPlayer for an AVPlayerItem now does nothing on iOS 26. Is this a breaking change, or is there a stricter requirement that I now have to implement?
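One way to narrow this down is to watch what the player reports at the moment play() is tapped. Below is a minimal diagnostic sketch (assuming a standard AVPlayer plus AVContentKeySession setup; attachDiagnostics is my own hypothetical helper), not a fix:

import AVFoundation

// Diagnostic sketch: log why AVPlayer refuses to start after play().
// A non-nil reasonForWaitingToPlay or an item error usually separates a
// FairPlay key-delivery failure from a playback-policy change.
var observations: [NSKeyValueObservation] = []

func attachDiagnostics(to player: AVPlayer) {
    observations.append(player.observe(\.timeControlStatus, options: [.new]) { player, _ in
        print("timeControlStatus:", player.timeControlStatus.rawValue,
              "waiting:", player.reasonForWaitingToPlay?.rawValue ?? "none")
    })
    observations.append(player.observe(\.currentItem?.status, options: [.new]) { player, _ in
        if let error = player.currentItem?.error {
            print("item failed:", error) // e.g. a content key lease error
        }
    })
}

If the item reports no error and the player just sits in .paused, that points at a behavioral change rather than a decryption failure, which would be worth a feedback report.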
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files in the container, the extension gets sandbox-denial errors despite having the proper App Group entitlements configured.
Setup:
Main App (Flutter) and TTS Audio Unit Extension share the same App Group
App Group is properly configured in developer portal and entitlements
Main app successfully creates and uses files in the container
Container structure shows existing directories (config/, dictionary/) with populated files
Both targets have App Group capability enabled and entitlements set
Current behavior:
Extension can access/read the App Group container
Extension can see existing directories and files
All write attempts are blocked with "sandbox deny(1) file-write-create" errors
Code example:
// Creates <App Group container>/<component> and returns its path.
// Note: the returned C string belongs to an autoreleased NSString; copy it
// (e.g. with strdup) if it must outlive the current autorelease scope.
const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) {
    NSString* groupIdStr = [NSString stringWithUTF8String:groupId];
    NSString* componentStr = [NSString stringWithUTF8String:component];
    NSURL* url = [[NSFileManager defaultManager]
        containerURLForSecurityApplicationGroupIdentifier:groupIdStr];
    if (url == nil) { // nil when the group ID is missing from the entitlements
        NSLog(@"No container for App Group %@", groupIdStr);
        return NULL;
    }
    NSURL* fullPath = [url URLByAppendingPathComponent:componentStr];
    NSError* error = nil;
    if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path
                                   withIntermediateDirectories:YES
                                                    attributes:nil
                                                         error:&error]) {
        NSLog(@"Unable to create directory %@", error.localizedDescription);
    }
    return [[fullPath path] UTF8String];
}
Error output:
Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace
Key questions:
Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers?
Is this a known limitation of TTS Audio Unit Extensions?
What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions?
If writing to App Group containers is not supported, what alternatives are available?
Current entitlements:
<dict>
<key>com.apple.security.application-groups</key>
<array>
<string>group.com.<company>.<appname></string>
</array>
</dict>
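If the write denial turns out to be inherent to this extension type, one fallback (an assumption on my part, not a documented recommendation) is to trace through the unified logging system instead of files, and read the output with Console.app or log collect:

import os

// Fallback sketch: route extension tracing through os.Logger instead of
// file writes. The subsystem string is a hypothetical example.
let tracer = Logger(subsystem: "com.example.appname.tts-extension", category: "trace")

func trace(_ message: String) {
    // Mark the payload public so it is not redacted in collected logs.
    tracer.info("\(message, privacy: .public)")
}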
Hi, I'm working on video-editing software that lets you composite and export videos. I use a custom compositor to apply my effects, etc.
In my crash dashboard, I am seeing reports of an EXC_BAD_ACCESS crash from objc_msgSend. Below is the stack trace.
libobjc.A.dylib objc_msgSend
libdispatch.dylib _dispatch_sync_invoke_and_complete_recurse
libdispatch.dylib _dispatch_sync_f_slow
[symbolication failed]
libdispatch.dylib _dispatch_client_callout
libdispatch.dylib _dispatch_lane_barrier_sync_invoke_and_complete
AVFCore -[AVCustomVideoCompositorSession(AVCustomVideoCompositorSession_FigCallbackHandling) _customCompositorShouldCancelPendingFrames]
AVFCore _customCompositorShouldCancelPendingFramesCallback
MediaToolbox remoteVideoCompositor_HandleVideoCompositorClientMessage
CoreMedia __figXPCConnection_CallClientMessageHandlers_block_invoke
libdispatch.dylib _dispatch_call_block_and_release
libdispatch.dylib _dispatch_client_callout
libdispatch.dylib _dispatch_lane_serial_drain
libdispatch.dylib _dispatch_lane_invoke
libdispatch.dylib _dispatch_root_queue_drain_deferred_wlh
libdispatch.dylib _dispatch_workloop_worker_thread
libsystem_pthread.dylib _pthread_wqthread
libsystem_pthread.dylib start_wqthread
What stood out to me is that this is only being reported from iOS 26.0+ devices. Part of the stack trace failed to symbolicate ([symbolication failed]). I'm 90% confident that this is Apple code, not my app's code.
I cannot reproduce this locally. Is this a known issue? What are the possible root causes, and how can I verify or eliminate them?
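In case it helps to rule things out: the crashing frame is AVFoundation's frames-cancellation callback, so one defensive shape (a sketch built on my own assumptions, not a confirmed fix) is to make the custom compositor's cancellation path serialized and idempotent, and to keep the compositor and its AVPlayerItem alive for the session's whole lifetime:

import AVFoundation

// Defensive sketch: all request bookkeeping happens on one serial queue so
// the cancellation callback (the path in the crash report) can never race
// the render path or message a deallocated object.
final class MyCompositor: NSObject, AVVideoCompositing {
    private let renderQueue = DispatchQueue(label: "compositor.render")
    private var pending: [AVAsynchronousVideoCompositionRequest] = []

    let sourcePixelBufferAttributes: [String: Any]? =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    let requiredPixelBufferAttributesForRenderContext: [String: Any] =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    func renderContextChanged(_ newContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        renderQueue.async {
            self.pending.append(request)
            // ... effect rendering would go here; this placeholder just
            // returns a blank buffer from the context's pool ...
            if let buffer = request.renderContext.newPixelBuffer() {
                request.finish(withComposedVideoFrame: buffer)
            } else {
                request.finish(with: NSError(domain: "Compositor", code: -1, userInfo: nil))
            }
            self.pending.removeAll { $0 === request }
        }
    }

    func cancelAllPendingVideoCompositionRequests() {
        // AVFoundation invokes this from its own queue (the dispatch_sync in
        // the stack trace); every pending request is finished exactly once.
        renderQueue.sync {
            self.pending.forEach { $0.finishCancelledRequest() }
            self.pending.removeAll()
        }
    }
}

If the crash persists even with a cancellation path like this, it looks more like a framework-side race in the remote compositor teardown, which would be worth reporting with the full crash log attached.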
Thanks,
Hi there,
Can anyone tell me how to possibly get approved as an Apple News Publisher in 2025?
We attempted in 2024, but received this message from Apple support:
"Thank you for your interest in Apple News. At this time, we're not accepting new applications."
When I inquired further, I got this second response:
"Apple News is no longer accepting unsolicited applications. To learn more about Apple News requirements, visit the Apple News support page. If you have any feedback, please use this form to send us your comments. Keep in mind that while we read all feedback, we are unable to respond to each submission individually."
My questions are:
Is this still the case? (Especially for a legitimate local news outlet.)
Is there a link to apply as a news publisher? I don't seem to have that option at all.
Thanks for any feedback.
Fetching the featured artists in a playlist no longer works in the iOS 26.1 beta.
let detailedPlaylist = try await playlist.with([.tracks, .featuredArtists], preferredSource: .library)
This throws an error when using .library, and using .catalog returns an empty array.
It works correctly on iOS 26.0 and iOS 18.
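Until that's resolved, a workaround sketch (my assumption about the failure mode, based only on the behavior described above) is to fall back to the catalog source when the library request throws:

import MusicKit

// Workaround sketch: prefer .library, fall back to .catalog if it throws.
// On the 26.1 beta the fallback may still return an empty featuredArtists array.
func loadDetails(for playlist: Playlist) async throws -> Playlist {
    do {
        return try await playlist.with([.tracks, .featuredArtists], preferredSource: .library)
    } catch {
        return try await playlist.with([.tracks, .featuredArtists], preferredSource: .catalog)
    }
}

That only papers over the regression; a feedback report against the beta is still warranted.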
Hi, I'm trying to plan out development of an app and am wondering if it is possible to have user-generated content automatically populate a custom ShazamKit catalog, and to query this catalog non-locally.
Storing all the submissions locally would obviously not scale.
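For reference, the local building block looks like the sketch below. As far as I know, ShazamKit has no hosted custom-catalog service, so the "non-local" part would mean distributing the catalog file through your own backend (an assumption about the architecture, not something ShazamKit provides):

import Foundation
import ShazamKit

// Sketch: each user submission becomes a reference signature in a custom
// catalog, which can then be serialized and shipped via your server.
func appendSubmission(signature: SHSignature, title: String,
                      to catalog: SHCustomCatalog) throws {
    let item = SHMediaItem(properties: [.title: title])
    try catalog.addReferenceSignature(signature, representing: [item])
}

func export(_ catalog: SHCustomCatalog, to url: URL) throws {
    try catalog.write(to: url) // produces a .shazamcatalog file
}

// Clients would download the file, load it into a catalog, and match
// locally with SHSession(catalog:).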
I'm working on a project to support spatial audio editing, using this sample project as a reference: https://developer.apple.com/documentation/Cinematic/editing-spatial-audio-with-an-audio-mix
This sample works well on an unedited capture, but does not work for a capture that has already been edited.
The failure is occurring at "let audioInfo = try await CNAssetSpatialAudioInfo(asset: myAsset)", which is throwing "no eligible audio tracks in asset".
I also find that for already-edited captures, if I use CNAssetSpatialAudioInfo.assetContainsSpatialAudio, it returns false.
What I mean by "already edited" is: if I take a spatial capture with my iPhone 16, edit that capture in the Photos app using the Cinematic effect, and then save the edited output (e.g. edited_capture.mov), I can't import that edited_capture.mov into my project as a spatial audio asset.
Is this intentional behavior or a bug?
If it's intentional, can you describe why?
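One way to see what the Photos edit actually exported is to dump the audio tracks of both files. This diagnostic sketch (plain AVFoundation, nothing from the Cinematic sample) shows whether the multichannel spatial track survived the edit:

import AVFoundation

// Diagnostic sketch: print the media subtype of every audio track so the
// original and edited captures can be compared.
func fourCC(_ code: FourCharCode) -> String {
    let bytes = [UInt8((code >> 24) & 0xff), UInt8((code >> 16) & 0xff),
                 UInt8((code >> 8) & 0xff), UInt8(code & 0xff)]
    return String(bytes: bytes, encoding: .ascii) ?? "\(code)"
}

func dumpAudioTracks(of url: URL) async throws {
    let asset = AVURLAsset(url: url)
    for track in try await asset.loadTracks(withMediaType: .audio) {
        for desc in try await track.load(.formatDescriptions) {
            print("track \(track.trackID):", fourCC(CMFormatDescriptionGetMediaSubType(desc)))
        }
    }
}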
According to the header file, the outputVolume property's supported range is 0.0-1.0:
/*! @property outputVolume
    @abstract The mixer's output volume.
    @discussion
        This accesses the mixer's output volume (0.0-1.0, inclusive).
*/
@property (nonatomic) float outputVolume;
However, when setting the volume to 2.0, the audio does indeed play louder. Is the header file out of date, and if so, what is the supported range for outputVolume?
Thanks
Please add a currently-playing-track endpoint to the Apple Music API. It's kind of wild that Apple Music goes after Spotify without having such a useful endpoint.
I am developing an app that uses MusicKit to play music; I then need to have spoken words played to the user while ducking the audio coming from MusicKit (ApplicationMusicPlayer).
The built-in Siri voices are not of sufficient quality, so I am using an external service to create an MP3 file and then playing it back with AVAudioPlayer via an AVAudioSession.
Sample code below
The problem I am having is that .duckOthers is not ducking the ApplicationMusicPlayer output.
Is this a bug, or am I doing this wrong?
// Configure the audio session for system-wide ducking
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [.duckOthers, .mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)

// Request a small IO buffer for lower latency (note: this does not control ducking depth)
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)

// Create and configure the audio player
self.audioPlayer = try AVAudioPlayer(data: audioData)
self.audioPlayer?.delegate = self
self.audioPlayer?.volume = 1.0 // Full volume for speech
self.audioPlayer?.prepareToPlay()

// Playback settings for maximum clarity
self.audioPlayer?.enableRate = false
self.audioPlayer?.pan = 0.0 // Center the audio
self.audioPlayer?.play()
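One thing worth checking (an assumption about the session lifecycle, since it isn't shown above): ducking is applied when the session activates and lifts when it deactivates, so activating right before the speech clip and deactivating right after it is the pattern ducking depends on:

import AVFoundation

// Sketch of the activate-speak-deactivate pattern. Other audio ducks at
// setActive(true) and is restored at setActive(false).
final class SpeechPlayer: NSObject, AVAudioPlayerDelegate {
    private var player: AVAudioPlayer?

    func playSpeech(_ data: Data) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .spokenAudio, options: [.duckOthers])
        try session.setActive(true) // ducking takes effect here
        let p = try AVAudioPlayer(data: data)
        p.delegate = self
        player = p // keep a strong reference for the duration of playback
        p.play()
    }

    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        // Hand audio focus back so the music returns to full volume.
        try? AVAudioSession.sharedInstance()
            .setActive(false, options: .notifyOthersOnDeactivation)
        self.player = nil
    }
}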
We are seeing logs where, on iOS devices, we see requests for keyframes like the one below, but from the Safari browser we don't see any request like this. Could you please explain what this is?
/d8ceb9244ff889b42b82eb807327531-c27dbcb10e0bbf3cde6c-1/d8ceb9244ff88e9b42b82eb807327531-c27dbcb10e0bbf3cde6c-1/keyframes/hls/.
In Final Cut Pro, keyframes for transform parameters (such as Position, Scale, and Rotation) are automatically set to “Smooth” interpolation. This often results in undesired easing between keyframes, especially when linear motion is required.
Currently, we have to manually adjust each keyframe to "Linear" using the Video Animation Editor, which can be time-consuming when working with many keyframes.
Would it be possible to add an option to set the default keyframe interpolation to "Linear"—either globally in Preferences or per parameter in the Inspector?
This would greatly streamline the animation workflow for many editors.
Thank you for considering this request!
Just downloaded iOS 26.1 and my phone keeps ringing after the call has been answered. Any fixes for this?
Who can help me make an app for iOS with Xcode?
Hello, I want to know whether there are any restrictions with MusicKit when used in a mobile app to manipulate audio with an EQ on tracks coming from Apple Music. I don't mean modifying the actual track structure/data, of course, just the audio output.
Hey folks, I'm running into an odd issue suddenly with an app that had a working MusicKit integration before.
I'm using ApplicationMusicPlayer to play Apple Music albums and songs. I'm testing on a physical device, signed in to Apple ID, and with a valid subscription. Apple Music via the first-party app works entirely fine on this device.
Attempting to play back any content at all gives the log:
<ICUserIdentityStoreACAccountBackend: 0x1070bf3e0> Failed to initialize primary apple account, error=Error Domain=ICError Code=-7013 "Client is not entitled to access account store" UserInfo={NSDebugDescription=Client is not entitled to access account store}
[ICUserIdentityStore] - initializing account histories with activeAccountDSID = nil, activeLockerAccountDSID = nil, timestamp = 14605951908
[ICUserIdentityStore] Failed to fetch local store account with error: Error Domain=ICError Code=-7013 "Client is not entitled to access account store" UserInfo={NSDebugDescription=Client is not entitled to access account store}.
The album artwork, track names, etc, all appear in the control center playback controls, but the music doesn't play. Trying to trigger playback with control center just results in it skipping to the next track, which doesn't play either.
This exact code used to work. I have the MusicKit service selected in App Store Connect. Since this isn't entitlement-based, I'm not sure how else to check that I'm set up correctly.
I've tried deleting/reinstalling the app, restarting the device, cleaning/rebuilding, and deleting DerivedData, to no avail.
Any help?
Running Xcode 16.4 (16F6), testing on iOS 18.5 (22F76)
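One sanity check that sometimes surfaces the real failure (a sketch, not a known fix for the -7013 log): confirm authorization and subscription status in code before handing anything to ApplicationMusicPlayer:

import MusicKit

// Preflight sketch: the -7013 account-store error sometimes accompanies a
// missing authorization grant or a subscription the API can't see.
func preflight() async -> Bool {
    guard await MusicAuthorization.request() == .authorized else {
        print("MusicKit not authorized")
        return false
    }
    do {
        let subscription = try await MusicSubscription.current
        print("canPlayCatalogContent:", subscription.canPlayCatalogContent)
        return subscription.canPlayCatalogContent
    } catch {
        print("subscription check failed:", error)
        return false
    }
}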
Hello,
I'm trying to determine the best/recommended AVAudioSession configuration (i.e category, mode, and options) for the following use-case.
Essentially, I'd like to switch between periods of playing an audio file and then recognizing speech. The audio file is typically speech, and I don't intend for playback and speech recognition to occur simultaneously. I'd like the user to still be able to interact with Siri, and I'd like it to work with CarPlay, where navigation prompts can occur.
I would assume the category to use is 'playAndRecord', but I'm not sure if it's better to just set that once for the entire lifecycle, or to set 'playback' for audio file playback and then switch to 'playAndRecord' for speech recognition. I'm also not sure of the best 'mode' and 'options' to set. Any suggestions would be appreciated.
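For what it's worth, one configuration that fits this description (a sketch under my own assumptions, not Apple guidance) is to set .playAndRecord once for the whole lifecycle, with options chosen so spoken prompts from Siri or CarPlay navigation interrupt rather than fight the app's audio:

import AVFoundation

// Sketch: one session configuration for both playback and speech recognition.
func configureSessionOnce() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(
        .playAndRecord,
        mode: .spokenAudio, // content is primarily speech
        options: [
            .interruptSpokenAudioAndMixWithOthers, // pause for nav/Siri prompts
            .allowBluetooth                        // hands-free and CarPlay routes
        ])
    try session.setActive(true)
}

Keeping the category fixed avoids the route rebuild (and possible audible glitch) that switching between 'playback' and 'playAndRecord' can cause; the tradeoff is that the input route stays claimed the whole time.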
Thanks.
I'm running HomePod OS 26 on two HomePod minis and OS 18.6 on my main HomePod (original).
I’ve enabled Crossfade in the Home app.
I’m playing Apple Music directly in the HomePod mini.
Crossfade just doesn’t work on any HomePod.
I can understand it not working on the original HomePod, but why isn't it working on the minis running OS 26?
I've tried disabling and re-enabling Crossfade, rebooting the HomePods, etc., but nothing works.
How can media resources in my app be recommended to the system media control center, just like TikTok in the attached picture?
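The standard mechanism for showing up there is publishing Now Playing metadata and remote commands while an active audio session is playing. A minimal sketch (titles and handlers are placeholders):

import MediaPlayer

// Sketch: publish Now Playing info and wire remote commands so the system
// media controls can display and drive the app's playback.
func publishNowPlaying(title: String, duration: TimeInterval, elapsed: TimeInterval) {
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyElapsedPlaybackTime: elapsed,
        MPNowPlayingInfoPropertyPlaybackRate: 1.0
    ]
}

func wireRemoteCommands(play: @escaping () -> Void, pause: @escaping () -> Void) {
    let center = MPRemoteCommandCenter.shared()
    _ = center.playCommand.addTarget { _ in play(); return .success }
    _ = center.pauseCommand.addTarget { _ in pause(); return .success }
}

The app also needs an active .playback AVAudioSession; without one, the system controls won't surface it.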
I am currently developing an HEVC player using VideoToolbox on an iOS device.
I have successfully created an HEVC decoder that receives HEVC streams from our custom image capture and encoding device, and it can decode and display images properly.
However, when my image capture and encoding device configures the encoder to output HEVC streams with fragmented NALUs (i.e., an I-frame or P-frame is split and stored across multiple slice NALUs), the iOS decoder can be initialized successfully but fails to decode and output images.
Can VideoToolbox properly decode HEVC bitstreams when a single frame is split into multiple slice NALUs?
Key Observations:
1. Single-NALU frames work fine.
2. Multi-NALU frames (sliced I/P-frames) cause decoding failure.
3. The decoder session is created successfully (VTDecompressionSessionCreate returns no error).
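VideoToolbox can decode multi-slice frames, but (in my experience; the sketch below reflects that assumption) all slice NALUs of one access unit must be packed into a single length-prefixed CMSampleBuffer. Submitting slices as separate samples typically produces exactly the symptoms above:

import CoreMedia
import VideoToolbox

struct DecodeError: Error { let status: OSStatus }

// Sketch: pack every slice NALU of ONE frame into a single sample using the
// hvc1 layout (4-byte big-endian length before each NALU), then submit one
// CMSampleBuffer per frame. formatDesc/session come from VPS/SPS/PPS elsewhere.
func decodeAccessUnit(sliceNALUs: [Data],
                      formatDesc: CMVideoFormatDescription,
                      session: VTDecompressionSession) throws {
    var sampleData = Data()
    for nalu in sliceNALUs {
        var len = UInt32(nalu.count).bigEndian
        withUnsafeBytes(of: &len) { sampleData.append(contentsOf: $0) }
        sampleData.append(nalu)
    }

    var blockBuffer: CMBlockBuffer?
    var status = CMBlockBufferCreateWithMemoryBlock(
        allocator: kCFAllocatorDefault,
        memoryBlock: nil,              // let CoreMedia allocate the backing store
        blockLength: sampleData.count,
        blockAllocator: nil,
        customBlockSource: nil,
        offsetToData: 0,
        dataLength: sampleData.count,
        flags: kCMBlockBufferAssureMemoryNowFlag,
        blockBufferOut: &blockBuffer)
    guard status == noErr, let block = blockBuffer else { throw DecodeError(status: status) }

    status = sampleData.withUnsafeBytes {
        CMBlockBufferReplaceDataBytes(with: $0.baseAddress!,
                                      blockBuffer: block,
                                      offsetIntoDestination: 0,
                                      dataLength: sampleData.count)
    }
    guard status == noErr else { throw DecodeError(status: status) }

    var sampleBuffer: CMSampleBuffer?
    var sampleSize = sampleData.count
    status = CMSampleBufferCreate(
        allocator: kCFAllocatorDefault,
        dataBuffer: block,
        dataReady: true,
        makeDataReadyCallback: nil,
        refcon: nil,
        formatDescription: formatDesc,
        sampleCount: 1,                // one sample == the whole access unit
        sampleTimingEntryCount: 0,
        sampleTimingArray: nil,
        sampleSizeEntryCount: 1,
        sampleSizeArray: &sampleSize,
        sampleBufferOut: &sampleBuffer)
    guard status == noErr, let sample = sampleBuffer else { throw DecodeError(status: status) }

    status = VTDecompressionSessionDecodeFrame(session, sampleBuffer: sample,
                                               flags: [], frameRefcon: nil,
                                               infoFlagsOut: nil)
    guard status == noErr else { throw DecodeError(status: status) }
}

If that packing is already in place and decoding still fails, the next thing to check is that the parameter sets used to create the format description actually match the sliced stream.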