My app encounters problems when trying to open an x86 Audio Unit v2 on an Apple Silicon Mac (even though Rosetta is installed).
There seems to be a XPC connection issue with the AUHostingService that I don't know how to fix.
I observed other host apps opening the same plugins without problem, so there is probably something wrong or incompatible in my code.
I noticed that:
The issue occurs whether or not the app is sandboxed.
The issue no longer occurs when the app itself runs under Rosetta.
CoreAudio reports no error during allocation and initialization of the audio unit. The first errors appear when AudioUnitRender is called from the rendering callback.
With most x86 plugins, the error on the first call is:
kAudioUnitErr_RenderTimeout
and on any subsequent call:
kAudioComponentErr_InstanceInvalidated
On the UI side, when the Cocoa view is loaded, it appears briefly, then disappears immediately, leaving its superview empty.
With another x86 plugin, the Cocoa View is loaded normally, but CoreAudio still emits
kAudioUnitErr_NoConnection
from AudioUnitRender, whether the view has been loaded or not, and the plugin produces no sound.
I also find these messages in the console (printed in that order):
CLIENT ERROR: RemoteAUv2ViewController does not override - and thus cannot react to catastrophic errors beyond logging them
AUAudioUnit_XPC.mm:641 Crashed AU possible component description: aumu/Helm/Tyte
My app uses the AUv2 API and I suspect that working with the AUv3 API would spare me these problems.
However, considering how my audio system is built (audio units are wrapped into C++ classes, and most connections between units are managed on the fly from the rendering callback), it would be a lot of work to convert, and I'm not even sure that everything I do with the AUv2 API would be possible with the AUv3 API.
I could possibly find an intermediate solution, but in the immediate future I'm looking for the simplest and fastest possible fix. If I cannot find better, I see two fallback options:
In this part of the doc: “Beginning with macOS 11, the system loads audio units into a separate process that depends on the architecture or host preference”, does “host preference” mean that it would be possible to disable the “out of process” behavior, for example from the app's entitlements or Info.plist?
Otherwise, as a last resort, I could completely disable the use of x86 Audio Units when my app runs under arm64, at least to make things cleaner. But the Audio Component API doesn’t give any info about the plugin architecture, so how could I find it?
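If it comes to that, the rough fallback I have in mind is to check the component bundles on disk, something like this (a sketch, assuming the AUv2 bundles live in the standard /Library/Audio/Plug-Ins/Components folder; as far as I can tell there's no direct Audio Component API for this):

import Foundation

// Sketch: does a component bundle contain an arm64 slice?
func bundleContainsARM64(at url: URL) -> Bool {
    guard let bundle = Bundle(url: url),
          let archs = bundle.executableArchitectures else {
        return false
    }
    // NSBundleExecutableArchitectureARM64 is available from macOS 11.
    return archs.contains(NSNumber(value: NSBundleExecutableArchitectureARM64))
}

// Example: list Intel-only components in the system plug-in folder.
let componentsDir = URL(fileURLWithPath: "/Library/Audio/Plug-Ins/Components")
if let bundles = try? FileManager.default.contentsOfDirectory(at: componentsDir,
                                                               includingPropertiesForKeys: nil) {
    for url in bundles where url.pathExtension == "component" {
        if !bundleContainsARM64(at: url) {
            print("Intel-only component: \(url.lastPathComponent)")
        }
    }
}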
Any tip or idea about this issue will be much appreciated.
Thanks in advance!
I have an app under development - demo here - https://youtu.be/VbAfUk_eYl0?si=s6EDBx-4G6P_QbZO - which is sort of an audio player for airdropped files - something useful to musicians who dump work in progress to their phone, make notes, revise and update.
I've been testing my handling of audio session interruption notifications, but there seems to be a lot of inconsistency in how, when and why iOS delivers them, and I'm wondering if there is some rhyme or reason to it that I'm just not detecting.
For example, I am playing a song in my app. Switch to Apple Music and start playing a song there. My app gets an interruption began notification - this is consistent.
Switch back to my app, and about half the time I will get an interruption ended notification (often coupled with a blast of the tail of whatever audio buffer was partially played when the interruption started, even though the engine was stopped, and followed by a call to my AVAudioPlayerNodeCompletionCallback; is there some way to avoid this?). The other half of the time I don't get an interruption ended notification; my app can (as expected) end the interruption by activating the AVAudioSession and playing something.
I have not been able to determine any pattern to this behavior, other than that, if my app started playing using AVAudioPlayerNode.scheduleSegment rather than scheduleFile, I think the notification will be consistently delivered on app activation rather than when I activate the session programmatically.
I would like my app to behave deterministically, and would appreciate any help in deciphering what causes the inconsistent behavior in notifications from iOS.
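For reference, my observer is the standard pattern; this is roughly what I'm doing (a trimmed sketch, with the actual pause/resume logic omitted):

import AVFoundation

NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let info = note.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }

    switch type {
    case .began:
        // Pause the engine / player node here.
        print("Interruption began")
    case .ended:
        let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
        // Reactivate the session and resume playback here if appropriate.
        print("Interruption ended, shouldResume = \(options.contains(.shouldResume))")
    @unknown default:
        break
    }
}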
I'm developing a visionOS app. I want to know how to play spatial audio by means other than RealityKit. Likewise, on iOS or macOS, how would I play spatial audio outside of RealityKit?
Bug Report: ScreenCaptureKit System Audio Capture Crashes with EXC_BAD_ACCESS
Summary
When using ScreenCaptureKit to capture system audio for extended periods, the application crashes with EXC_BAD_ACCESS in Swift's error handling runtime. The crash occurs in swift_getErrorValue when trying to process an error from the SCStream delegate method didStopWithError. This appears to be a framework-level issue in ScreenCaptureKit or its underlying ReplayKit implementation.
Environment
macOS Sonoma 14.6.1
Swift 5.8
ScreenCaptureKit framework
Detailed Description
Our application captures system audio using ScreenCaptureKit's audio capture capabilities. After successfully capturing for several minutes (typically after 3-4 segments of 60-second recordings), the application crashes with an EXC_BAD_ACCESS error. The crash happens when the Swift runtime attempts to process an error in the SCStreamDelegate.stream(_:didStopWithError:) method.
The crash consistently occurs in swift_getErrorValue when attempting to access the class of what appears to be a null object. This suggests that the error being passed from the system framework to our delegate method is malformed or contains invalid memory.
Steps to Reproduce
Create an SCStream with audio capture enabled
Add audio output to the stream
Start capture and write audio data to disk
Allow the capture to run for several minutes (3-5 minutes typically triggers the issue)
The app will crash with EXC_BAD_ACCESS in swift_getErrorValue
Code Sample
func stream(_ stream: SCStream, didStopWithError error: Error) {
    print("Stream stopped with error: \(error)") // Crash occurs before this line executes
}

func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
    guard type == .audio, sampleBuffer.isValid else { return }
    // Process audio data...
}
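For completeness, the capture setup follows this general pattern (a condensed sketch; the real code selects the display differently and handles errors more carefully):

import ScreenCaptureKit

func startSystemAudioCapture(handler: SCStreamDelegate & SCStreamOutput) async throws -> SCStream {
    // A content filter is required even for audio-only capture.
    let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
    guard let display = content.displays.first else { throw NSError(domain: "Capture", code: -1) }
    let filter = SCContentFilter(display: display, excludingWindows: [])

    let config = SCStreamConfiguration()
    config.capturesAudio = true              // enable system audio capture
    config.excludesCurrentProcessAudio = true

    let stream = SCStream(filter: filter, configuration: config, delegate: handler)
    try stream.addStreamOutput(handler, type: .audio,
                               sampleHandlerQueue: DispatchQueue(label: "audio.capture"))
    try await stream.startCapture()
    return stream
}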
Expected Behavior
The error should be properly propagated to the delegate method, allowing for graceful error handling and recovery.
Actual Behavior
The application crashes with EXC_BAD_ACCESS when the Swift runtime attempts to process the error in swift_getErrorValue.
Crash Log Details
Thread #35, queue = 'com.apple.NSXPCConnection.m-user.com.apple.replayd', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
frame #0: 0x0000000194c3088c libswiftCore.dylib`swift::_swift_getClass(void const*) + 8
frame #1: 0x0000000194c30104 libswiftCore.dylib`swift_getErrorValue + 40
frame #2: 0x00000001057fba30 shadow`NewScreenCaptureService.stream(stream=0x0000600002de6700, error=Swift.Error @ 0x000000016b7b5e30) at NEW+ScreenCaptureService.swift:365:15
frame #3: 0x00000001057fc050 shadow`@objc NewScreenCaptureService.stream(_:didStopWithError:) at <compiler-generated>:0
frame #4: 0x0000000219ec5ca0 ScreenCaptureKit`-[SCStreamManager stream:didStopWithError:] + 456
frame #5: 0x00000001ca68a5cc ReplayKit`-[RPScreenRecorder stream:didStopWithError:] + 84
frame #6: 0x00000001ca696ff8 ReplayKit`-[RPDaemonProxy stream:didStopWithError:] + 224
Printing description of stream._streamQueue:
error: ObjectiveC.id:4294967281:18: note: 'id' has been explicitly marked unavailable here
public typealias id = AnyObject
^
error: /var/folders/v4/3xg1hmp93gjd8_xlzmryf_wm0000gn/T/expr23-dfa421..cpp:1:65: 'id' is unavailable in Swift: 'id' is not available in Swift; use 'Any'
Swift._DebuggerSupport.stringForPrintObject(Swift.UnsafePointer<id>(bitPattern: 0x104ae08c0)!.pointee)
^~
ObjectiveC.id:2:18: note: 'id' has been explicitly marked unavailable here
public typealias id = AnyObject
^
warning: /var/folders/v4/3xg1hmp93gjd8_xlzmryf_wm0000gn/T/expr23-dfa421..cpp:5:7: initialization of variable '$__lldb_error_result' was never used; consider replacing with assignment to '_' or removing it
var $__lldb_error_result = __lldb_tmp_error
~~~~^~~~~~~~~~~~~~~~~~~~
_
Before the crash, we observed this error message in the console:
[ERROR] *****SCStream*****RemoteAudioQueueOperationHandlerWithError:1015 Error received from the remote queue -16665
Additional Context
The issue occurs consistently after approximately 3-4 successful audio segment recordings of 60 seconds each
Commenting out custom segment rotation logic does not prevent the crash
The crash involves XPC communication with Apple's ReplayKit daemon
The error appears to be corrupted or malformed when crossing the XPC boundary
Workarounds Attempted
Added proper thread safety for all published properties using DispatchQueue.main.async
Implemented more robust error handling in the delegate methods
None of these approaches prevented the crash since it occurs at the Swift runtime level before our code executes.
Impact
This issue prevents reliable long-duration audio capture using ScreenCaptureKit.
This bug significantly limits the usefulness of ScreenCaptureKit for any application requiring continuous system audio capture for more than a few minutes.
This issue might be related to a macOS bug where the system dialog indicates that the screen is being shared even though nothing is actually being shared; moreover, when attempting to stop sharing, nothing happens.
Hello,
I have an iOS app that is recording audio that is working fine on iPads/iPhones. It asks for microphone permission and after that recording works.
I installed the same app on my M3 MacBook via TestFlight since iPad apps are supposed to work without a change that way. The app starts fine and everything, but it never asks for Microphone permission, so I can't record.
Do I need to do something to make this happen? (This is not Mac Catalyst; it's running the arm64 iPhone binary on macOS.)
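For reference, the permission request in the app is just the standard call (a minimal sketch; NSMicrophoneUsageDescription is in the Info.plist, since the prompt does appear on iPhone and iPad):

import AVFoundation

AVAudioSession.sharedInstance().requestRecordPermission { granted in
    DispatchQueue.main.async {
        print("Microphone permission granted: \(granted)")
        // Recording starts only when granted == true.
    }
}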
thanks
The presentation "create audio drivers with DriverKit" from WWDC 2021 demonstrates how to use a dext to implement a virtual audio driver. It also says " If a virtual audio driver or device is all that is needed, the audio server plug-in driver model should continue to be used".
Indeed, in AudioDriverKit/AudioDriverKitTypes.h, there is no IOUserAudioTransportType Virtual, although CoreAudio/AudioHardwareBase.h includes kAudioDeviceTransportTypeVirtual.
For one of our products, we require virtual devices to implement a software loopback "cable". We've implemented this using the "traditional" HAL plugin, and as a proof-of-concept, also using a dext. In the dext, I tried setting the transport type to 'virt', which seems to only have the effect of changing the icon shown in Audio Midi Setup.
HAL plugins require an installer, and the installer has to kill coreaudiod in a post-install script. You have to turn off SIP to debug them. Just like AudioDriverKit drivers, they are out-of-process and run in a process not owned by the hosting app. Our HAL plugin's interface is property based; we had to write a lot of boiler-plate code to implement required properties. Writing an AudioDriverKit driver is in most respects easier - a lot of the scaffolding is implemented in the base driver, which we only alter where required. Debugging and installation is much easier.
The dext works just fine, as far as we can ascertain, just as well as a HAL plugin.
So, my question is - is the advice to use a HAL plugin for a virtual device still correct in 2025? And if so, what's the objection? We'd really prefer to ship the AudioDriverKit virtual audio device.
After upgrading to iOS 18.4, I'm no longer able to establish an AirPlay v1 connection to an audio system. The symptom is that the AirPlay route picker just spins when trying to connect to an audio system. It eventually gives up.
I tested this on an iPhone 14, connecting to a HomePod, AirPort express, AppleTV and a Wiim Pro. If I try connecting with AirPlay v2, ex: using Apple Music, the connection succeeds and audio can be played.
I'm the developer of an app that plays audio over AirPlay while also recording. My app has to use AirPlay v1 because AVAudioSession doesn't allow the policy .longFormAudio when the category is .playAndRecord. This issue is a real pain, as it means my app is suddenly broken for many thousands of users.
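For context, the constraint looks roughly like this (a simplified sketch; the exact fallback options are illustrative):

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    // This is what long-form AirPlay 2 routing would require, but it throws,
    // because .longFormAudio isn't allowed together with .playAndRecord:
    try session.setCategory(.playAndRecord, mode: .default, policy: .longFormAudio, options: [])
} catch {
    // So the app stays on the default routing policy, which uses AirPlay v1.
    try? session.setCategory(.playAndRecord, mode: .default, policy: .default, options: [.allowAirPlay])
}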
Is anyone else seeing this issue? Any suggestions for a workaround?
Is there a recommended way on macOS 26 Tahoe to take a CoreAudio AudioObjectID and use it to lookup the underlying USB LocationID?
I previously used AudioObjectID to query the corresponding DeviceUID with kAudioDevicePropertyDeviceUID. Then I queried for the IOService matching kIOAudioEngineClassName with property kIOAudioEngineGlobalUniqueIDKey matching DeviceUID, and I loaded kUSBDevicePropertyLocationID from the result.
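In code, that previous approach was roughly the following (a condensed sketch; error handling is trimmed, and the string literals stand in for kIOAudioEngineGlobalUniqueIDKey and kUSBDevicePropertyLocationID):

import CoreAudio
import IOKit

// Step 1: AudioObjectID -> DeviceUID
func deviceUID(for deviceID: AudioObjectID) -> String? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyDeviceUID,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var uid: CFString? = nil
    var size = UInt32(MemoryLayout<CFString?>.size)
    let status = withUnsafeMutablePointer(to: &uid) { ptr in
        AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, ptr)
    }
    return status == noErr ? uid as String? : nil
}

// Step 2: DeviceUID -> USB LocationID via the IO Registry (pre-macOS 26 behavior)
func usbLocationID(forDeviceUID uid: String) -> UInt32? {
    var iterator: io_iterator_t = 0
    let matching = IOServiceMatching("IOAudioEngine") // kIOAudioEngineClassName
    guard IOServiceGetMatchingServices(kIOMainPortDefault, matching, &iterator) == KERN_SUCCESS else { return nil }
    defer { IOObjectRelease(iterator) }

    var service = IOIteratorNext(iterator)
    while service != 0 {
        defer { IOObjectRelease(service); service = IOIteratorNext(iterator) }
        let engineUID = IORegistryEntryCreateCFProperty(
            service, "IOAudioEngineGlobalUniqueID" as CFString, kCFAllocatorDefault, 0)?
            .takeRetainedValue() as? String
        if engineUID == uid {
            // Walk up the registry to the USB device that owns this engine.
            let location = IORegistryEntrySearchCFProperty(
                service, kIOServicePlane, "locationID" as CFString, kCFAllocatorDefault,
                IOOptionBits(kIORegistryIterateRecursively | kIORegistryIterateParents)) as? NSNumber
            return location?.uint32Value
        }
    }
    return nil
}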
This fails on macOS 26, because the IO Registry for the device has an entry for usbaudiod rather than AppleUSBAudioEngine, and usbaudiod does not include a kIOAudioEngineGlobalUniqueIDKey property (or any other property to map it to a CoreAudio DeviceUID).
My use-case here is a piece of audio recording software that allows configuring a set of supported audio devices via USB HID prior to recording. I present the user with a list of CoreAudio devices to use, but without a way to lookup the underlying USB LocationID, I cannot guarantee that the configured device matches the selected device (e.g. if the user plugged in two identical microphones).
I'm developing an iOS app that requires continuous audio recording.
Currently, when a phone call comes in, the AVAudioSession is interrupted and recording stops completely during the ringing phase.
While I understand recording should stop if the call is answered, my app needs to continue recording while the phone is merely ringing.
I've observed that Apple's Voice Memos app maintains recording during incoming call rings. This indicates the hardware and iOS are capable of supporting this functionality.
Request
Please advise on any available AVAudioSession configurations or APIs that would allow my app to:
Continue recording during an incoming call ring
Only stop recording if/when the call is actually answered
Impact
This interruption significantly impacts the user experience and core functionality of my app. Workarounds like asking users to enable airplane mode are impractical and create a poor user experience.
Questions
Is there an approved way to maintain microphone access during call rings?
If not currently possible, could this capability be considered for addition to a future iOS SDK?
Are there any interim solutions or best practices Apple recommends for this use case?
Thank you for your help.
SUPPORT INFORMATION
Did someone from Apple ask you to submit a code-level support request?
No
Do you have a focused test project that demonstrates your issue?
Yes, I have a focused test project to submit with my request
What code level support issue are you having?
Problems with an Apple framework API in my app
Hello everyone,
I'm implementing the new AVInputPickerInteraction API on iOS 26 to allow users to select their microphone from a custom settings menu before recording.
The implementation seems correct, but I'm encountering a strange issue where the input selection immediately reverts to the previous device.
The Situation:
The picker is presented correctly via a manual call to .present(). I can see all available inputs (e.g., "iPhone Microphone" and "AirPods").
The current input is "iPhone Microphone".
I tap on "AirPods".
The UI updates to show "AirPods" as selected for a fraction of a second, then immediately jumps back to "iPhone Microphone".
The same thing happens in reverse.
It seems like the system is automatically reverting the audio route change requested by the picker.
My Implementation:
My setup follows the standard pattern discussed in the WWDC sessions.
Setup Code:
This setup is performed once before the user can trigger the picker.
@available(iOS 26.0, *)
var inputPickerInteraction: AVInputPickerInteraction?
// Note: The AVAudioSession is configured to .playAndRecord
// and set to active elsewhere in the code before this setup is called.
if #available(iOS 26.0, *) {
    // Setup the picker
    let picker = AVInputPickerInteraction()
    self.inputPickerInteraction = picker
    self.view.addInteraction(picker) // Added to establish context
}
Presentation Code:
When a user selects "Change Input" from my custom settings menu, I call .present() on the main thread.
// In a delegate method from a custom menu
if #available(iOS 26.0, *) {
    DispatchQueue.main.async {
        self.inputPickerInteraction?.present(animated: true)
    }
}
What I've already checked:
The AVAudioSession is active and its category is .playAndRecord.
The inputPickerInteraction object is not nil.
The .present() method is being called on the main thread.
The picker is added to a view using view.addInteraction() in the setup phase.
I've reviewed my code to ensure there is no other logic that could be manually resetting the AVAudioSession's preferred input.
Has anyone else experienced this behavior? I suspect this might be a bug in the new API, but I want to make sure I'm not missing a crucial step in managing the AVAudioSession state.
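One diagnostic worth keeping in place is a route-change observer, to see what reason the system reports when it reverts the input (a minimal sketch):

import AVFoundation

NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { note in
    let session = AVAudioSession.sharedInstance()
    let reasonValue = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt ?? 0
    let reason = AVAudioSession.RouteChangeReason(rawValue: reasonValue)
    print("Route change reason: \(String(describing: reason))")
    print("Current inputs: \(session.currentRoute.inputs.map(\.portName))")
}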
Any insights or potential workarounds would be greatly appreciated.
Thank you.
My current app implements a custom video player, based on an AVSampleBufferRenderSynchronizer synchronizing two renderers:
an AVSampleBufferDisplayLayer receiving decoded CVPixelBuffer-based video CMSampleBuffers,
and an AVSampleBufferAudioRenderer receiving decoded lpcm-based audio CMSampleBuffers.
The AVSampleBufferRenderSynchronizer is started when the first image (in presentation order) is decoded and enqueued, using avSynchronizer.setRate(_ rate: Float, time: CMTime), with rate = 1 and time the presentation timestamp of the first decoded image.
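In code, the start sequence is essentially this (a simplified sketch; the enqueueing of decoded sample buffers is not shown):

import AVFoundation

let synchronizer = AVSampleBufferRenderSynchronizer()
let displayLayer = AVSampleBufferDisplayLayer()
let audioRenderer = AVSampleBufferAudioRenderer()

// Both renderers are attached to the same synchronizer.
synchronizer.addRenderer(displayLayer)
synchronizer.addRenderer(audioRenderer)

// Video and audio CMSampleBuffers are enqueued elsewhere; playback starts
// at the presentation timestamp of the first decoded video frame.
let firstVideoPTS = CMTime(value: 0, timescale: 600) // placeholder for the real first PTS
synchronizer.setRate(1.0, time: firstVideoPTS)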
Presentation timestamps of video and audio sample buffers are consistent, and on most streams, the audio and video are correctly synchronized.
However, on some network streams on iOS, the audio and video aren't synchronized, with a time difference that seems to increase over time.
On the other hand, with the same player code and network streams on macOS, the synchronization always works fine.
This reminds me of something I've read about cases where an AVSampleBufferRenderSynchronizer cannot synchronize audio and video, causing them to run with independent and potentially drifting clocks, but I cannot find it again.
So, any help / hints on this sync problem will be greatly appreciated! :)
I'm seeing crashes in _MPRemoteCommandEventDispatch on iOS 26.x devices in 3 apps. According to Bugsnag logs they are:
NSInternalInconsistencyException: event dispatch <_MPRemoteCommandEventDispatch: <MPRemoteCommandEvent: 0x11c049500 commandID=THV0 command=<MPRemoteCommand: 0x109ad1ea0 type=Play (0) enabled=YES handlers=[0x109b6a310]> sourceID=(null) ([HostedRoutingSessionDataSource] handleControlSendingCommand<2W5E>)> state:201> deallocated without calling continuation
I attached a log from Xcode organizer matching Bugsnag crash.
mpr_remote_command_event.crash
When I set a breakpoint on -[_MPRemoteCommandEventDispatch dealloc], I can see it's hit every time I tap the play or pause button on the lock screen.
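For reference, the lock-screen commands are registered in the usual way; a minimal sketch of that registration (not the exact production code) is:

import MediaPlayer

let center = MPRemoteCommandCenter.shared()

_ = center.playCommand.addTarget { _ in
    // Resume playback here.
    return .success
}

_ = center.pauseCommand.addTarget { _ in
    // Pause playback here.
    return .success
}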
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x00000002370420cc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e975c810 pthread_kill + 268 (pthread.c:1721)
2 libsystem_c.dylib 0x0000000198f8ff64 abort + 124 (abort.c:122)
3 libc++abi.dylib 0x000000018a7cf808 __abort_message + 132 (abort_message.cpp:66)
4 libc++abi.dylib 0x000000018a7be484 demangling_terminate_handler() + 304 (cxa_default_handlers.cpp:76)
5 libobjc.A.dylib 0x000000018a6cff78 _objc_terminate() + 156 (objc-exception.mm:496)
6 xxxxxxxxxxxxxx 0x00000001003a7db8 CPPExceptionTerminate() + 416 (BSG_KSCrashSentry_CPPException.mm:156)
7 libc++abi.dylib 0x000000018a7cebdc std::__terminate(void (*)()) + 16 (cxa_handlers.cpp:59)
8 libc++abi.dylib 0x000000018a7ceb80 std::terminate() + 108 (cxa_handlers.cpp:88)
9 CoreFoundation 0x000000018d7341c4 __CFRunLoopPerCalloutARPEnd + 256 (CFRunLoop.c:769)
10 CoreFoundation 0x000000018d70bb5c __CFRunLoopRun + 1976 (CFRunLoop.c:3179)
11 CoreFoundation 0x000000018d70aa6c _CFRunLoopRunSpecificWithOptions + 532 (CFRunLoop.c:3462)
12 GraphicsServices 0x000000022e31c498 GSEventRunModal + 120 (GSEvent.c:2049)
13 UIKitCore 0x00000001930ceba4 -[UIApplication _run] + 792 (UIApplication.m:3902)
14 UIKitCore 0x0000000193077a78 UIApplicationMain + 336 (UIApplication.m:5577)
15 xxxxxxxxxxxxxx 0x00000001000c0134 main + 308 (main.swift:15)
16 dyld 0x000000018a722e28 start + 7116 (dyldMain.cpp:1477)
Is the crash happening when the app is being terminated?
Thank you!
Hi all,
with my app ScreenFloat, you can record your screen, along with system- and microphone audio.
Those two audio feeds are recorded into separate audio tracks in order to individually remove or edit them later on.
Now, these recordings you create with ScreenFloat can be drag-and-dropped to other apps instantly. So far, so good, but some apps, like Slack, or VLC, or even websites like YouTube, do not play back multiple audio tracks, just one.
So what I'm trying to do is, on dragging the video recording file out of ScreenFloat, instantly baking together the two individual audio tracks into one, and offering that new file as the drag and drop file, so that all audio is played in the target app.
But it's slow. I mean, it's actually quite fast, but for drag and drop, it's slow.
My approach is this:
"Bake together" the two audio tracks into a one-track m4a audio file using AVMutableAudioMix and AVAssetExportSession
Take the video track, add the new audio file as an audio track to it, and render that out using AVAssetExportSession
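In code, step 1 is roughly the following (a simplified sketch; the AVMutableAudioMix volume adjustments and error reporting are omitted):

import AVFoundation

// Step 1 sketch: merge the recording's two audio tracks into a single m4a.
func exportMergedAudio(from movieURL: URL, to outputURL: URL, completion: @escaping () -> Void) {
    let asset = AVURLAsset(url: movieURL)
    let composition = AVMutableComposition()

    // Copy each source audio track into the composition.
    for sourceTrack in asset.tracks(withMediaType: .audio) {
        guard let track = composition.addMutableTrack(withMediaType: .audio,
                                                      preferredTrackID: kCMPersistentTrackID_Invalid) else { continue }
        try? track.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                   of: sourceTrack, at: .zero)
    }

    // The AppleM4A preset mixes all audio tracks down to one AAC track on export.
    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetAppleM4A) else { return }
    export.outputURL = outputURL
    export.outputFileType = .m4a
    export.exportAsynchronously {
        completion()
    }
}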
For a quick benchmark with a 3'40'' movie, step 1 takes ~1.7 seconds, and step 2 adds another ~1.5 seconds, so we're at ~3.2 seconds. That's an eternity for a drag and drop, where the user might cancel if there's no immediate feedback.
I could also do it in one step, but then I couldn't use the AV*Passthrough preset, and that makes it take around 32 seconds, because I assume it then touches the video data (which is unnecessary in this case, so I think the two-step approach here is the fastest).
So, my question is, is there a faster way?
The best idea I can come up with right now is this: when initially recording the screen with system and microphone audio as separate tracks, also record both of them into a third, muted, "hidden" track I could use later on, basically eliminating the need for step 1. I'd then just rip the two single audio tracks out of the movie and keep only the video and the (then unmuted) "hidden" track, but I'd still have a ~1.5 second delay there. Also, there's the processing and data overhead (basically doubling the movie's audio data).
All this would be great for an export operation (where one expects it to take a little time), but for a drag-and-drop operation, it's not ideal.
I've discarded the idea of doing a promise file drag, because many apps do not accept those, and I want to keep wide compatibility with all sorts of apps.
I'd appreciate any ideas or pointers.
Thank you kindly,
Matthias
Overview
We are producing audio in real time from an editing application and are trying to put it on an HLS stream. We submit PCM samples through an audio writer, but we get a crash after a certain number of samples have been appended.
Depending on the number of audio frames in the PCM buffer, we might get more iterations before the crash, but it always has the same traceback (see below).
Code
The setup is rather simple. We took inspiration from a few sources around the web.
NSMutableDictionary *audio = [[NSMutableDictionary alloc] init];
[audio setObject:@(kAudioFormatMPEG4AAC) forKey:AVFormatIDKey];
[audio setObject:[NSNumber numberWithInt:config.audioSampleRate] // 48000
forKey:AVSampleRateKey];
[audio setObject:[NSNumber numberWithInt:config.audioChannels] // 2
forKey:AVNumberOfChannelsKey];
[audio setObject:@160000 forKey:AVEncoderBitRateKey];
m_audioConfig = [[NSDictionary alloc] initWithDictionary:audio];
m_audio = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
outputSettings:m_audioConfig];
AVAudioFrameCount audioFrames = BUFFER_SAMPLES * bCount;
AVAudioPCMBuffer *pcmBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:m_full.pcmFormat
frameCapacity:audioFrames];
pcmBuffer.frameLength = pcmBuffer.frameCapacity;
AudioChannelLayout layout;
memset(&layout, 0, sizeof(layout));
layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
CMFormatDescriptionRef format;
OSStatus stats = CMAudioFormatDescriptionCreate(
kCFAllocatorDefault,
pcmBuffer.format.streamDescription,
sizeof(layout),
&layout,
0,
nil,
nil,
&format
);
for (int i = 0; i < bCount; i++)
{
    AudioPCM pcm;
    audioCallback->callback(pcm);
    memcpy(*(pcmBuffer.int16ChannelData) + (bufferSize * i), pcm.data, bufferSize);
}
size_t samplesConsumed = BUFFER_SAMPLES * bCount;
CMSampleBufferRef sampleBuffer;
CMSampleTimingInfo timing;
timing.duration = CMTimeMake(1, config.audioSampleRate);
timing.presentationTimeStamp = presentationTime;
timing.decodeTimeStamp = kCMTimeInvalid;
OSStatus ostatus = CMSampleBufferCreate(
kCFAllocatorDefault,
nil,
false,
nil,
nil,
format,
(CMItemCount)pcmBuffer.frameLength,
1,
&timing,
0,
nil,
&sampleBuffer
);
////
ostatus = CMSampleBufferSetDataBufferFromAudioBufferList(
sampleBuffer,
kCFAllocatorDefault,
kCFAllocatorDefault,
kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
pcmBuffer.audioBufferList
);
if (ostatus != noErr)
{
    NSLog(@"fill audio sample from buffer list failed: %s", logAudioError(ostatus));
    return;
}
ostatus = CMSampleBufferSetDataReady(sampleBuffer);
if (ostatus != noErr)
{
    NSLog(@"set sample buffer ready failed: %s", logAudioError(ostatus));
    return;
}
// Finally we can attach it, then shove the presentation time forward
[m_audio appendSampleBuffer:sampleBuffer];
The Crash
The crash points towards some level of deallocation when the conversion tooling is done or has enough samples to process an output packet? It's hard to say.
0 caulk 0x1a1e9532c caulk::alloc::tiered_allocator<caulk::alloc::size_range_tier<0ul, 1008ul, caulk::alloc::tree_allocator<caulk::alloc::chunk_allocator<caulk::alloc::page_allocator, caulk::alloc::bitmap_allocator, caulk::alloc::embed_block_memory, 16384ul, 16ul, 6ul>>>, caulk::alloc::size_range_tier<1009ul, 256000ul, caulk::alloc::guarded_edges_allocator<caulk::alloc::consolidating_free_map<caulk::alloc::page_allocator, 10485760ul>, 4ul>>, caulk::alloc::tracking_allocator<caulk::alloc::page_allocator>>::deallocate(caulk::alloc::block, unsigned long) + 636
1 AudioToolboxCore 0x1993fbfe4 ExtendedAudioBufferList_Destroy + 112
2 AudioToolboxCore 0x1993d5fe0 std::__1::__optional_destruct_base<ACCodecOutputBuffer, false>::~__optional_destruct_base[abi:ne180100]() + 68
3 AudioToolboxCore 0x1993d5f48 acv2::CodecConverter::~CodecConverter() + 196
4 AudioToolboxCore 0x1993d5e5c acv2::CodecConverter::~CodecConverter() + 16
5 AudioToolboxCore 0x1992574d8 std::__1::vector<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>, std::__1::allocator<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>>>::__clear[abi:ne180100]() + 84
6 AudioToolboxCore 0x199259acc acv2::AudioConverterChain::RebuildConverterChain(acv2::ChainBuildSettings const&) + 116
7 AudioToolboxCore 0x1992596ec acv2::AudioConverterChain::SetProperty(unsigned int, unsigned int, void const*) + 1808
8 AudioToolboxCore 0x199324acc acv2::AudioConverterV2::setProperty(unsigned int, unsigned int, void const*) + 84
9 AudioToolboxCore 0x199327f08 with_resolved(OpaqueAudioConverter*, caulk::function_ref<int (AudioConverterAPI*)>) + 60
10 AudioToolboxCore 0x1993281e4 AudioConverterSetProperty + 72
11 MediaToolbox 0x1a7566c2c FigSampleBufferProcessorCreateWithAudioCompression + 2296
12 MediaToolbox 0x1a754db08 0x1a70b5000 + 4819720
13 MediaToolbox 0x1a754dab4 FigMediaProcessorCreateForAudioCompressionWithFormatWriter + 100
14 MediaToolbox 0x1a77ebb98 0x1a70b5000 + 7564184
15 MediaToolbox 0x1a7804158 0x1a70b5000 + 7663960
16 MediaToolbox 0x1a7801da0 0x1a70b5000 + 7654816
17 AVFCore 0x1ada530c4 -[AVFigAssetWriterTrack addSampleBuffer:error:] + 192
18 AVFCore 0x1ada55164 -[AVFigAssetWriterAudioTrack _flushPendingSampleBuffersReturningError:] + 500
19 AVFCore 0x1ada55354 -[AVFigAssetWriterAudioTrack addSampleBuffer:error:] + 472
20 AVFCore 0x1ada4ebf0 -[AVAssetWriterInputWritingHelper appendSampleBuffer:error:] + 128
21 AVFCore 0x1ada4c354 -[AVAssetWriterInput appendSampleBuffer:] + 168
22 lib_devapple_hls.dylib 0x115d2c7cc detail::AppleHLSImplementation::audioRuntime() + 1052
23 lib_devapple_hls.dylib 0x115d2d094 void* std::__1::__thread_proxy[abi:ne180100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void (detail::AppleHLSImplementation::*)(), detail::AppleHLSImplementation*>>(void*) + 72
24 libsystem_pthread.dylib 0x196e5b2e4 _pthread_start + 136
Any insight would be welcome!
I have the new iOS 26 SpeechTranscriber working in my application. The issue I am facing is how to determine whether the device I am running on supports SpeechTranscriber. I was able to write code that tests if the device supports transcription, but it takes a bit of time to run, so the result is not available when the app launches.
What I am looking for is a list of iOS 26 devices it doesn't run on. I think it's safe to assume any new devices will support it, so if we could just have a list of devices that can run iOS 26 but cannot do transcription, it would be much faster for the app. I have determined it doesn't work on an SE 2nd Gen; it works on iPhone 12, SE 3rd Gen, iPhone 14 Pro, and 15 Pro. As SpeechTranscriber doesn't work in the simulator, I can't determine it that way. I have checked the docs and they don't list the devices it doesn't work on.
Many Apple users own both Bluetooth earphones (AirPods) and traditional wired earphones. While Bluetooth audio provides freedom of movement, some users still prefer wired earphones for comfort, sound profile, or personal preference. However, plugging wired earphones directly into an iPhone can feel restrictive and inconvenient during daily use.
This proposal suggests a hybrid audio approach where wired earphones can be connected to a Bluetooth-enabled AirPods charging case (or a similar Apple-designed module), allowing users to enjoy wired earphones without a physical connection to the iPhone.
#Problem Statement
*Wired earphones offer consistent audio quality and zero latency
*Bluetooth earphones provide freedom from cables
*Users must currently choose one or the other
*Plugging wired earphones into an iPhone limits movement and can feel intrusive in daily scenarios (walking, commuting, working)
There is no native Apple solution that allows wired earphones to function wirelessly while maintaining Apple’s audio experience standards.
#Proposed Solution
Introduce a Wired-to-Wireless Audio Mode through the AirPods charging case or a dedicated Apple Bluetooth audio bridge.
How it works:
User plugs wired earphones into the AirPods case (or a future AirPods accessory port)
The case acts as a Bluetooth audio transmitter
Audio is streamed wirelessly from iPhone to the case
The case outputs audio to the wired earphones
#User experiences:
No cable connected to the iPhone
Familiar wired earphone sound
Freedom of movement similar to Bluetooth earbuds
User Experience (UX Flow)
Plug wired earphones into the AirPods case
iPhone automatically detects:
“Wired Earphones via AirPods Case”
Seamless pairing using existing AirPods framework
Audio controls, volume, and switching handled through iOS
No additional apps required
#Key Benefits
Combines wired sound reliability with wireless convenience
Reduces physical cable disturbance during use
Extends usefulness of existing wired earphones
Minimal learning curve for users
Fits naturally into Apple’s ecosystem and design philosophy
#Privacy & Performance Considerations
On-device audio processing only
No cloud involvement
Low-latency audio using Apple’s proprietary Bluetooth codecs
Power-efficient usage leveraging AirPods case battery
#Target Users
Users who prefer wired earphones but want wireless freedom
Commuters and walkers
Developers and professionals who multitask
Users sensitive to Bluetooth earbud fit or comfort
#Ecosystem Fit
Builds on existing AirPods pairing and audio stack
Aligns with Apple’s focus on seamless UX
Could be implemented via:
New AirPods hardware
Firmware update + accessory
Dedicated Apple audio bridge
Hello,
The search functionality of the coreaudio-api mailing list archive has been broken for a very long time. Several of the lower-level audio APIs have only been discussed on this mailing list, making it critical for those of us maintaining old audio code.
Steps to reproduce:
Open https://lists.apple.com/archives/list/coreaudio-api@lists.apple.com/ in your web browser.
Enter a search term in the "Search this list" field in the top-right corner of the page.
The search will eventually time out with "502 Bad Gateway"
Can somebody please forward this information to the current maintainer? I've tried to contact developer support but they weren't sure what to do.
Thanks!
The device is connected to two Bluetooth audio devices, A and B, and audio is currently playing through Bluetooth A. When the user taps a button in the interface, how can I switch playback to Bluetooth B in code?
I've got a problem with my app where I'm testing it on my own phone.
I'm using audio kit to generate tones as part of the app. Everything seems to work fine. Sounds start, Stop, etc. They play when the app is closed and when the phone is locked, so background is working.
However, I'm seeing an issue where, even when STOP is pressed and the application exited, if I get a notification such as a text message, the base tone for the app starts to play.
If I then open the app and check the Start/Stop button, it says Start, so that hasn't been activated. If I tap Start, a second tone starts. That one stops with the Stop button, but the original tone that was set off by the incoming message carries on playing.
Until I go to the Open Apps View on the phone and slide the application upwards.
For the life of me, I can't figure out whats happening here.
Since many users like me use Apple Music on Android, and the app is almost as feature-rich as on iOS, it would be fantastic if the developers could add the new iOS 26 features to the Android app, along with a minor UI refresh. I know it's challenging to implement Liquid Glass on Android hardware and design, but features like AutoMix, pronunciation, and translation could be added.
Kindly consider this request!