Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Low-Level Networking on watchOS for Duplex Audio Streaming
I watched WWDC 2019 Session 716 and understand that an active audio session is key to unlocking low-level networking on watchOS. I'm configuring my audio session and engine as follows:

```swift
private func configureAudioSession(completion: @escaping (Bool) -> Void) {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [])
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

        // Retrieve sample rate and configure the audio format.
        let sampleRate = audioSession.sampleRate
        print("Active hardware sample rate: \(sampleRate)")
        audioFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)

        // Configure the audio engine.
        audioInputNode = audioEngine.inputNode
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: audioFormat)
        try audioEngine.start()
        completion(true)
    } catch {
        print("Error configuring audio session: \(error.localizedDescription)")
        completion(false)
    }
}

private func setupUDPConnection() {
    let parameters = NWParameters.udp
    parameters.includePeerToPeer = true
    connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters)
    setupNWConnectionHandlers()
}

private func setupTCPConnection() {
    let parameters = NWParameters.tcp
    connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters)
    setupNWConnectionHandlers()
}

private func setupWebSocketConnection() {
    guard let url = URL(string: "ws://***.***.xxxxx.***:0000") else {
        print("Invalid WebSocket URL")
        return
    }
    let session = URLSession(configuration: .default)
    webSocketTask = session.webSocketTask(with: url)
    webSocketTask?.resume()
    print("WebSocket connection initiated")
    sendAudioToServer()
    receiveDataFromServer()
    sendWebSocketPing(after: 0.6)
}

private func setupNWConnectionHandlers() {
    connection?.stateUpdateHandler = { [weak self] state in
        DispatchQueue.main.async {
            switch state {
            case .ready:
                print("Connected (NWConnection)")
                self?.isConnected = true
                self?.failToConnect = false
                self?.receiveDataFromServer()
                self?.sendAudioToServer()
            case .waiting(let error), .failed(let error):
                print("Connection error: \(error.localizedDescription)")
                DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
                    self?.setupNetwork()
                }
            case .cancelled:
                print("NWConnection cancelled")
                self?.isConnected = false
            default:
                break
            }
        }
    }
    connection?.start(queue: .main)
}
```

Duplex in this context means two-way audio transmission: simultaneously recording and sending audio while also receiving and playing back incoming audio, similar to a VoIP/SIP call. The setup works fine on the simulator, which suggests the core logic is correct. However, since the simulator doesn't fully replicate watchOS hardware behavior, especially for audio sessions and networking, issues might arise when running on a real device. The problem likely lies in the Watch's actual hardware limitations, permission constraints, or specific audio session configurations.

I'm reaching out for further assistance with the difficulties I've been having establishing UDP, TCP, and WebSocket connections on watchOS using NWConnection for duplex audio streaming. Despite implementing the recommendations provided earlier, I'm still running into problems. From what I can see, your implementation is focused on streaming audio playback with the server. In my case, I'm looking for a slightly different approach: I want to capture audio and send buffers of a specific size to the server while playing audio simultaneously, essentially achieving full-duplex streaming similar to a VoIP call. Additionally, I'd like to ensure that if no external audio route is connected, the Apple Watch speaker is used by default. Any thoughts or insights on adapting this setup for those requirements would be very welcome.
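For the capture side of the duplex path, here is a minimal sketch of one possible approach: tap the engine's input node and forward fixed-size chunks over the existing connection. The `engine` and `connection` parameters stand in for the properties in the post, and the 0.1-second buffer size is an illustrative assumption.

```swift
import AVFAudio
import Network

// Sketch: capture microphone buffers and ship them to the server while the
// player node keeps rendering whatever audio arrives from the other side.
func startDuplexCapture(engine: AVAudioEngine, connection: NWConnection) {
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)
    // Roughly 0.1 s of audio per buffer (illustrative; tune as needed).
    let bufferSize = AVAudioFrameCount(format.sampleRate / 10)
    input.installTap(onBus: 0, bufferSize: bufferSize, format: format) { buffer, _ in
        // Flatten the first channel into Data (mono assumed, as in the post).
        guard let channel = buffer.floatChannelData?[0] else { return }
        let byteCount = Int(buffer.frameLength) * MemoryLayout<Float>.size
        let data = Data(bytes: channel, count: byteCount)
        connection.send(content: data, completion: .contentProcessed { _ in })
    }
}
```

For the speaker question, the `.defaultToSpeaker` category option on AVAudioSession is the usual lever on iOS; whether it behaves the same on watchOS is something to verify on hardware.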
1 reply · 0 boosts · 234 views · Apr ’25
PHFetchOptions: Full List of Supported Predicate Keys
Hi, could anybody share the full list of supported predicate keys for PHFetchOptions? I'm aware of the list posted in the documentation: https://developer.apple.com/documentation/photos/phfetchoptions However, I have reason to believe this is not an exhaustive list, and there also seem to be mistakes in this doc, e.g. isFavorite does not work but favorite does.

Through some experimentation I also found that this works, even though adjustmentFormatIdentifier is not listed as a supported key:

```swift
NSPredicate(format: "adjustmentFormatIdentifier == 'com.pixelmatorteam.touch.x.photo.PhotosAdjustmentData.EmbeddedSlimSidecarFileInfo.compressed'")
```

Are there other secret keys that you are aware of? Specifically, I want to filter a fetch result for edited items, something like this (I tried it; it doesn't work):

```swift
NSPredicate(format: "hasAdjustments == true")
```

The native Photos app has such a filter, which leads me to believe that there probably is a key for this. If one of the framework developers reads this: could you please update the documentation page with this information?

Finally, if there really aren't any more secret keys, is there a way to achieve this with adjustmentFormatIdentifier? I have tried a bunch of things already, like adjustmentFormatIdentifier != nil, but for some reason that gives me the exact opposite of what I want: all the photos without edits. 🫠 Any tips on the correct syntax here would be much appreciated.
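In the meantime, one workaround built only on documented API (a sketch, assuming an in-memory filter is acceptable for your library size): fetch first, then keep the assets that carry an adjustment-data resource.

```swift
import Photos

// Sketch: filter a fetch result down to edited assets by checking for an
// adjustmentData resource. Slower than a real predicate, but documented.
func editedAssets(in result: PHFetchResult<PHAsset>) -> [PHAsset] {
    var edited: [PHAsset] = []
    result.enumerateObjects { asset, _, _ in
        let resources = PHAssetResource.assetResources(for: asset)
        if resources.contains(where: { $0.type == .adjustmentData }) {
            edited.append(asset)
        }
    }
    return edited
}
```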
1 reply · 0 boosts · 137 views · Apr ’25
MusicLibrary.createPlaylist() Method Causing App to Freeze Despite Proper Authorization Checks
Dear Apple Developer Community, I'm encountering a critical issue with the MusicLibrary.shared.createPlaylist() method in MusicKit that's affecting our app's core functionality. Despite implementing all recommended authorization checks, the app consistently freezes for some users when this method is called.

What we've already verified before calling createPlaylist():

- Network connectivity is properly checked and confirmed
- Apple Music authorization is explicitly requested via MusicAuthorization.request()
- User subscription status is verified using MusicSubscription.current.canPlayCatalogContent

Despite these precautions, many users report that the app completely freezes when attempting to create a playlist. This is particularly concerning, as playlist creation is a core feature of our application.

User-reported workarounds (with mixed success):

- Some users have resolved the issue by restarting their devices or reinstalling our app
- Others report success after enabling "Sync Library" in Settings → Music
- Unfortunately, a significant number of users still experience the issue even after trying both solutions above

We've reviewed the MusicKit documentation thoroughly and ensured our implementation follows all best practices. Our app correctly handles permissions and uses the async/await pattern as required by the API.

- Is there a known issue with the createPlaylist() method that might cause it to block indefinitely?
- Are there additional authorization steps or settings we should check before calling this method?
- Could this be related to how MusicKit communicates with the Apple Music servers?

Any insights from the developer community or official guidance would be greatly appreciated, as this issue is severely impacting our user experience. Thank you for your assistance.
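As a diagnostic rather than a fix, a sketch that races the call against a deadline, so a wedged createPlaylist() surfaces as an error instead of a frozen UI. The createPlaylist(name:) overload and the 15-second deadline are assumptions to adapt; if the underlying call is truly stuck, its task leaks until it returns, but the caller stays responsive.

```swift
import MusicKit
import os

enum PlaylistCreationError: Error { case timedOut }

// Sketch: resume the caller from whichever finishes first, the MusicKit
// call or the deadline. OSAllocatedUnfairLock guards the single resume.
func createPlaylistWithTimeout(name: String,
                               deadline: Duration = .seconds(15)) async throws -> Playlist {
    let resumed = OSAllocatedUnfairLock(initialState: false)
    return try await withCheckedThrowingContinuation { continuation in
        func resumeOnce(_ result: Result<Playlist, Error>) {
            let first = resumed.withLock { done -> Bool in
                if done { return false }
                done = true
                return true
            }
            if first { continuation.resume(with: result) }
        }
        Task.detached {
            do { resumeOnce(.success(try await MusicLibrary.shared.createPlaylist(name: name))) }
            catch { resumeOnce(.failure(error)) }
        }
        Task.detached {
            try? await Task.sleep(for: deadline)
            resumeOnce(.failure(PlaylistCreationError.timedOut))
        }
    }
}
```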
0 replies · 0 boosts · 98 views · Apr ’25
Issues with "AVMetricEventStreamPublisher Discover Media Performance Metrics in AVFoundation" Example Code
Hi everyone! I've been working with AVFoundation and trying to use AVMetricEventStreamPublisher to discover media performance metrics, as described in the Apple documentation: https://developer.apple.com/cn/videos/play/wwdc2024/10113/?time=508 However, when following the example code, I'm not getting the expected results: the performance metrics for both audio and video don't seem to be captured properly. Has anyone successfully used this example code? If so, could you share your experience or any solutions you've found? Any tips or insights would be greatly appreciated. Thanks in advance!

P.S. The example code:

```objc
AVPlayerItem *item = ...
AVMetricEventStream *eventStream = [AVMetricEventStream eventStream];
id subscriber = [[MyMetricSubscriber alloc] init];

[eventStream setSubscriber:subscriber queue:mySerialQueue];
[eventStream subscribeToMetricEvent:[AVMetricPlayerItemLikelyToKeepUpEvent class]];
[eventStream subscribeToMetricEvent:[AVMetricPlayerItemPlaybackSummaryEvent class]];
[eventStream addPublisher:item];
```
2 replies · 0 boosts · 209 views · Apr ’25
Crackling/Popping sound when using AVAudioUnitTimePitch
I have a simple AVAudioEngine graph as follows: AVAudioPlayerNode -> AVAudioUnitEQ -> AVAudioUnitTimePitch -> AVAudioUnitReverb -> main mixer node of AVAudioEngine. Whenever I have AVAudioUnitTimePitch or AVAudioUnitVarispeed in the graph, I notice a very distinct crackling/popping sound in my AirPods Pro 2 when starting up the engine and playing the AVAudioPlayerNode, and I'm unable to find the reason why this is happening. When I remove the node, the crackling completely goes away. How do I fix this problem, since I need the user to be able to control the pitch and rate of the audio during playback?

```swift
import AVKit

@Observable
@MainActor
class AudioEngineManager {
    nonisolated private let engine = AVAudioEngine()
    private let playerNode = AVAudioPlayerNode()
    private let reverb = AVAudioUnitReverb()
    private let pitch = AVAudioUnitTimePitch()
    private let eq = AVAudioUnitEQ(numberOfBands: 10)

    private var audioFile: AVAudioFile?
    private var fadePlayPauseTask: Task<Void, Error>?
    private var playPauseCurrentFadeTime: Double = 0

    init() {
        setupAudioEngine()
    }

    private func setupAudioEngine() {
        guard let url = Bundle.main.url(forResource: "Song name goes here", withExtension: "mp3") else {
            print("Audio file not found")
            return
        }

        do {
            audioFile = try AVAudioFile(forReading: url)
        } catch {
            print("Failed to load audio file: \(error)")
            return
        }

        reverb.loadFactoryPreset(.mediumHall)
        reverb.wetDryMix = 50

        pitch.pitch = 0 // Start at the default pitch; the slider adjusts this in cents.

        engine.attach(playerNode)
        engine.attach(pitch)
        engine.attach(reverb)
        engine.attach(eq)

        // Connect: player -> eq -> pitch -> reverb -> main mixer
        engine.connect(playerNode, to: eq, format: audioFile?.processingFormat)
        engine.connect(eq, to: pitch, format: audioFile?.processingFormat)
        engine.connect(pitch, to: reverb, format: audioFile?.processingFormat)
        engine.connect(reverb, to: engine.mainMixerNode, format: audioFile?.processingFormat)
    }

    func prepare() {
        guard let audioFile else { return }
        playerNode.scheduleFile(audioFile, at: nil)
    }

    func play() {
        DispatchQueue.global().async { [weak self] in
            guard let self else { return }
            engine.prepare()
            try? engine.start()

            DispatchQueue.main.async { [weak self] in
                guard let self else { return }

                playerNode.play()

                fadePlayPauseTask?.cancel()
                playPauseCurrentFadeTime = 0
                fadePlayPauseTask = Task { [weak self] in
                    guard let self else { return }
                    while true {
                        let volume = updateVolume(for: playPauseCurrentFadeTime / 0.1, rising: true)
                        // Ramp up volume until 1 is reached
                        if volume >= 1 { break }
                        engine.mainMixerNode.outputVolume = volume
                        try await Task.sleep(for: .milliseconds(10))
                        playPauseCurrentFadeTime += 0.01
                    }
                    engine.mainMixerNode.outputVolume = 1
                }
            }
        }
    }

    func pause() {
        fadePlayPauseTask?.cancel()
        playPauseCurrentFadeTime = 0
        fadePlayPauseTask = Task { [weak self] in
            guard let self else { return }
            while true {
                let volume = updateVolume(for: playPauseCurrentFadeTime / 0.1, rising: false)
                // Ramp down volume until 0 is reached
                if volume <= 0 { break }
                engine.mainMixerNode.outputVolume = volume
                try await Task.sleep(for: .milliseconds(10))
                playPauseCurrentFadeTime += 0.01
            }
            engine.mainMixerNode.outputVolume = 0
            playerNode.pause()

            // Shut down engine once ramp down completes
            DispatchQueue.global().async { [weak self] in
                guard let self else { return }
                engine.pause()
            }
        }
    }

    private func updateVolume(for x: Double, rising: Bool) -> Float {
        if rising {
            // Fade in
            return Float(pow(x, 2) * (3.0 - 2.0 * x))
        } else {
            // Fade out
            return Float(1 - (pow(x, 2) * (3.0 - 2.0 * x)))
        }
    }

    func setPitch(_ value: Float) { pitch.pitch = value }
    func setReverbMix(_ value: Float) { reverb.wetDryMix = value }
}

struct ContentView: View {
    @State private var audioManager = AudioEngineManager()
    @State private var pitch: Float = 0
    @State private var reverb: Float = 0

    var body: some View {
        VStack(spacing: 20) {
            Text("🎵 Audio Player with Reverb & Pitch")
                .font(.title2)

            HStack {
                Button("Prepare") { audioManager.prepare() }
                Button("Play") { audioManager.play() }
                    .padding()
                    .background(Color.green)
                    .foregroundColor(.white)
                    .cornerRadius(10)
                Button("Pause") { audioManager.pause() }
                    .padding()
                    .background(Color.red)
                    .foregroundColor(.white)
                    .cornerRadius(10)
            }

            VStack {
                Text("Pitch: \(Int(pitch)) cents")
                Slider(value: $pitch, in: -2400...2400, step: 100) { _ in
                    audioManager.setPitch(pitch)
                }
            }

            VStack {
                Text("Reverb Mix: \(Int(reverb))%")
                Slider(value: $reverb, in: 0...100, step: 1) { _ in
                    audioManager.setReverbMix(reverb)
                }
            }
        }
        .padding()
    }
}
```
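One experiment worth trying (an assumption, not a confirmed fix): keep the mixer muted while the engine spins up, so any transient the time-pitch unit emits while priming is inaudible, then let the existing fade task ramp the volume back up.

```swift
import AVFAudio

// Sketch: mute before start; the post's fade-in task then ramps
// outputVolume from 0 back up to 1 once playback begins.
func startEngineSilently(_ engine: AVAudioEngine, playerNode: AVAudioPlayerNode) throws {
    engine.mainMixerNode.outputVolume = 0 // mute before starting the engine
    engine.prepare()
    try engine.start()
    playerNode.play()
    // ...run the existing fade-in task here...
}
```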
1 reply · 0 boosts · 261 views · Apr ’25
CVPixelBufferCreate EXC_BAD_ACCESS
I am doing something similar to this post. Within an AVCaptureDataOutputSynchronizerDelegate method, I create a pixel buffer using CVPixelBufferCreate with the following attributes: kCVPixelBufferIOSurfacePropertiesKey as String: true, kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey as String: true. When I copy the data from the vImage pixel buffer "rotatedImageBuffer", I get the following error: Thread 10: EXC_BAD_ACCESS (code=1, address=0x14caa8000). I get the same error with memcpy and data.copyBytes (not running them at the same time, obviously). If I use CVPixelBufferCreateWithBytes, I do not get this error; however, CVPixelBufferCreateWithBytes does not let you include attributes (see the linked post above). I am using vImage because I need the original CVPixelBuffer from the camera output and a rotated version with a different color scheme.

```swift
// Copy to pixel buffer
let attributes: NSDictionary = [
    kCVPixelBufferIOSurfacePropertiesKey as String: true,
    kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey as String: true,
]

var colorBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                 Int(rotatedImageBuffer.width),
                                 Int(rotatedImageBuffer.height),
                                 kCVPixelFormatType_32BGRA,
                                 attributes,
                                 &colorBuffer)
// Does NOT produce the error, but also does not take the attributes:
//let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes, nil, nil, attributes as CFDictionary, &colorBuffer)

guard status == kCVReturnSuccess, let colorBuffer = colorBuffer else {
    print("Failed to create buffer")
    return
}

let lockFlags = CVPixelBufferLockFlags(rawValue: 0)
guard kCVReturnSuccess == CVPixelBufferLockBaseAddress(colorBuffer, lockFlags) else {
    print("Failed to lock base address")
    return
}

let colorBufferMemory = CVPixelBufferGetBaseAddress(colorBuffer)!
let data = Data(bytes: rotatedImageBuffer.data, count: rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height))
data.copyBytes(to: colorBufferMemory.assumingMemoryBound(to: UInt8.self), count: data.count) // Fails here
//memcpy(colorBufferMemory, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height)) // Also produces the same error

CVPixelBufferUnlockBaseAddress(colorBuffer, lockFlags)
```
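One thing worth checking (an assumption, not a confirmed diagnosis): CVPixelBufferCreate may allocate rows with padding, so the destination's bytesPerRow can be larger than rotatedImageBuffer.rowBytes, and a single contiguous copy of rowBytes × height can run past the end of the allocation. A sketch that copies row by row using each buffer's own stride:

```swift
import Accelerate
import CoreVideo

// Sketch: copy a vImage_Buffer into a CVPixelBuffer honoring both strides.
func copyPixels(from source: vImage_Buffer, into pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
    guard let dstBase = CVPixelBufferGetBaseAddress(pixelBuffer),
          let srcBase = source.data else { return }
    let dstStride = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let srcStride = source.rowBytes
    let rowLength = min(dstStride, srcStride) // never copy past either row
    for row in 0..<Int(source.height) {
        memcpy(dstBase.advanced(by: row * dstStride),
               srcBase.advanced(by: row * srcStride),
               rowLength)
    }
}
```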
1 reply · 0 boosts · 271 views · Apr ’25
Intermittent Memory Leak Indicated in Simulator When Using AVAudioEngine with mainMixerNode Only
Hello, I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup—just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode—Xcode's memory diagnostics and Instruments sometimes flag a leak. Here is a simplified version of the code I'm using:

```objc
// This function is called when the user taps a button in the view controller:
#import "ViewController.h"

@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
}

- (IBAction)myButtonAction:(id)sender {
    NSLog(@"Test");
    soundCreate();
}

@end

// media.m
static AVAudioEngine *audioEngine = nil;

void soundCreate(void) {
    if (audioEngine != nil)
        return;

    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    audioEngine = [[AVAudioEngine alloc] init];
    AVAudioPlayerNode *playerNode = [[AVAudioPlayerNode alloc] init];
    [audioEngine attachNode:playerNode];
    [audioEngine connect:playerNode to:(AVAudioNode *)[audioEngine mainMixerNode] format:nil];
    [audioEngine startAndReturnError:nil];
}
```

In the memory leak report, the following call stack is repeated, seemingly in a loop:

```
ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*)    AudioToolboxCore
ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&)    AudioToolboxCore
AUListenerAddParameter    AudioToolboxCore
addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool)    AudioToolboxCore
0x180178ddf
```
0 replies · 0 boosts · 127 views · Apr ’25
AVAudioRecorder loses audio recorded before interruption
Hi everyone, I'm running into an issue with AVAudioRecorder when handling interruptions such as phone calls or alarms.

Problem: When the app is recording audio and an interruption occurs, I handle it with audioRecorder?.pause() inside AVAudioSession.interruptionNotification (on .began). On .ended, I check for .shouldResume and call audioRecorder?.record() again. The recorder resumes successfully, but only the audio recorded after the interruption is saved. The audio recorded before the interruption is lost, even though I'm using the same file URL and not recreating the recorder.

Repro:
- Start a recording with AVAudioRecorder
- Simulate a system interruption (e.g., an incoming call)
- Resume recording after the interruption
- Stop and inspect the output audio file

Expected: The full audio (before and after the interruption) should be saved.
Actual: Only the audio after the interruption is saved; the earlier part is missing.

Notes: According to the documentation, calling .record() after .pause() should resume recording into the same file. I confirmed that the file URL does not change, and I do not recreate the recorder instance. No error is thrown by the system during this process. This behavior happens consistently when the app is interrupted and resumed.

Question: Is this a known issue? Is there a recommended workaround for preserving the full recording when interruptions happen? Thanks in advance!
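A workaround sketch, assuming a short gap across the interruption is acceptable: finalize the file with stop() on .began (which reliably flushes it to disk) and open a fresh segment on .ended, merging the segments (e.g., with AVMutableComposition) when the user finishes. The AAC settings below are illustrative.

```swift
import AVFAudio

// Sketch: record each interruption-free stretch into its own file.
final class SegmentedRecorder {
    private(set) var segments: [URL] = []
    private var recorder: AVAudioRecorder?

    func handleInterruption(_ note: Notification) {
        guard let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }
        switch type {
        case .began:
            recorder?.stop() // finalizes the current segment on disk
        case .ended:
            startNewSegment()
        @unknown default:
            break
        }
    }

    func startNewSegment() {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("segment-\(segments.count).m4a")
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC, // illustrative settings
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1,
        ]
        guard let newRecorder = try? AVAudioRecorder(url: url, settings: settings) else { return }
        segments.append(url)
        recorder = newRecorder
        newRecorder.record()
    }
}
```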
1 reply · 1 boost · 374 views · Apr ’25
CoreAudio server plugin gaining write access with SystemConfiguration.framework functions
Hi, our CoreAudio server plugin utilizes the SystemConfiguration.framework to store and restore specific shared, system-wide settings. While our application can authenticate to gain write access to the shared configuration settings through SystemConfiguration.framework, the CoreAudio server plugin obviously can't have any user interaction and therefore does not authenticate. Is it possible to authenticate the CoreAudio server plugin to gain write permissions? Are there any entitlements or other means that would allow this? Thanks!
2 replies · 0 boosts · 171 views · Apr ’25
iOS AVPlayer Subtitles / Captions
As of iOS 18, as far as I can tell, there are still no AVPlayer options that allow users to toggle the caption/subtitle track on and off. Does anyone know of a way to do this with AVPlayer or with SwiftUI's VideoPlayer? The following code reproduces the issue; it can be pasted into an app playground. This is a random video and a random VTT file I found on the internet.

```swift
import SwiftUI
import AVKit
import UIKit

struct ContentView: View {
    private let video = URL(string: "https://server15700.contentdm.oclc.org/dmwebservices/index.php?q=dmGetStreamingFile/p15700coll2/15.mp4/byte/json")!
    private let captions = URL(string: "https://gist.githubusercontent.com/samdutton/ca37f3adaf4e23679957b8083e061177/raw/e19399fbccbc069a2af4266e5120ae6bad62699a/sample.vtt")!

    @State private var player: AVPlayer?

    var body: some View {
        VStack {
            VideoPlayerView(player: player)
                .frame(maxWidth: .infinity, maxHeight: 200)
        }
        .task {
            // Captions won't work for some reason
            player = try? await loadPlayer(video: video, captions: captions)
        }
    }
}

private struct VideoPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer?

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.player = player
        controller.modalPresentationStyle = .overFullScreen
        return controller
    }

    func updateUIViewController(_ uiViewController: AVPlayerViewController, context: Context) {
        uiViewController.player = player
    }
}

private func loadPlayer(video: URL, captions: URL?) async throws -> AVPlayer {
    let videoAsset = AVURLAsset(url: video)
    let videoPlusSubtitles = AVMutableComposition()

    try await videoPlusSubtitles.add(videoAsset, withMediaType: .video)
    try await videoPlusSubtitles.add(videoAsset, withMediaType: .audio)

    if let captions {
        let captionAsset = AVURLAsset(url: captions)
        // Must add as .text. .closedCaption and .subtitle don't work?
        try await videoPlusSubtitles.add(captionAsset, withMediaType: .text)
    }

    return await AVPlayer(playerItem: AVPlayerItem(asset: videoPlusSubtitles))
}

private extension AVMutableComposition {
    func add(_ asset: AVAsset, withMediaType mediaType: AVMediaType) async throws {
        let duration = try await asset.load(.duration)
        try await asset.loadTracks(withMediaType: mediaType).first.map { track in
            let newTrack = self.addMutableTrack(withMediaType: mediaType, preferredTrackID: kCMPersistentTrackID_Invalid)
            let range = CMTimeRangeMake(start: .zero, duration: duration)
            try newTrack?.insertTimeRange(range, of: track, at: .zero)
        }
    }
}
```
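For comparison, a sketch of the media-selection route: when the asset itself carries a legible track group (as HLS streams with subtitle renditions do), it can be toggled per item. This won't help a bare MP4 plus a sideband VTT, which may be exactly why the composition approach above stays silent.

```swift
import AVFoundation

// Sketch: toggle the legible (subtitle/caption) media selection on an item.
func setSubtitles(_ enabled: Bool, for item: AVPlayerItem) async throws {
    guard let group = try await item.asset
        .loadMediaSelectionGroup(for: .legible) else { return }
    if enabled {
        // Picks the first option; real code would match the user's locale.
        item.select(group.options.first, in: group)
    } else {
        item.select(nil, in: group)
    }
}
```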
0 replies · 0 boosts · 115 views · Apr ’25
TTS Audio Unit Extension: File Write Access in App Group Container Denied Despite Proper Entitlements
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files to the container, the extension gets sandbox-denied errors despite having proper App Group entitlements configured.

Setup:
- The main app (Flutter) and the TTS Audio Unit Extension share the same App Group
- The App Group is properly configured in the developer portal and entitlements
- The main app successfully creates and uses files in the container
- The container structure shows existing directories (config/, dictionary/) with populated files
- Both targets have the App Group capability enabled and entitlements set

Current behavior:
- The extension can access/read the App Group container
- The extension can see existing directories and files
- All write attempts are blocked with "sandbox deny(1) file-write-create" errors

Code example:

```objc
const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) {
    NSString* groupIdStr = [NSString stringWithUTF8String:groupId];
    NSString* componentStr = [NSString stringWithUTF8String:component];

    NSURL* url = [[NSFileManager defaultManager] containerURLForSecurityApplicationGroupIdentifier:groupIdStr];
    NSURL* fullPath = [url URLByAppendingPathComponent:componentStr];

    NSError *error = nil;
    if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path
                                   withIntermediateDirectories:YES
                                                    attributes:nil
                                                         error:&error]) {
        NSLog(@"Unable to create directory %@", error.localizedDescription);
    }
    return [[fullPath path] UTF8String];
}
```

Error output:

```
Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace
```

Key questions:
- Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers?
- Is this a known limitation of TTS Audio Unit Extensions?
- What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions?
- If writing to App Group containers is not supported, what alternatives are available?

Current entitlements:

```xml
<dict>
    <key>com.apple.security.application-groups</key>
    <array>
        <string>group.com.<company>.<appname></string>
    </array>
</dict>
```
0 replies · 0 boosts · 113 views · Apr ’25
MusicKit - Skipping Forwards or Backwards does not update
Hello everyone, I am working on an app that lets you review your own music using Apple Music. Currently I am running into an issue with skipping forward and backward outside of the app.

How it should work: When skipping forward or backward on the lock or home screen of an iPhone, the next or previous song on an album should play, and the information should change to reflect that in the app. If you play a song in Apple Music, you can see a Now Playing view on the lock screen; when you skip forward or backward, the change is reflected by a little frequency icon on the artwork of the song now playing.

What it's doing: When skipping forward or backward on the lock or home screen, the next or previous song is reflected outside of the app, but not in the app. Skipping a song outside the app correctly advances to the next song, but when I return to the app, the change is not reflected.

NOTE: I am not using MusicKit types such as Track or Album to display the songs. Since I want to grab the songs and review them, I need a rating, so I created my own model that stores the MusicItemID, name, artist(s), etc.
NOTE: I am using ApplicationMusicPlayer.shared.

Is there a way to get the song change to reflect in my app? (If it's easier, a simple example would be nice; no need to create an entire Xcode project.)
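One direction to try (hedged; the exact observable surface of MusicKit's queue may differ from this sketch): ApplicationMusicPlayer exposes its queue to SwiftUI, so observing currentEntry should catch skips made from the lock screen and let the in-app review UI resync.

```swift
import MusicKit
import SwiftUI

// Sketch: watch the player's current entry and resync the custom model.
struct NowPlayingSyncView: View {
    @ObservedObject private var queue = ApplicationMusicPlayer.shared.queue

    var body: some View {
        Text(queue.currentEntry?.title ?? "Nothing playing")
            .onChange(of: queue.currentEntry?.title) { _ in
                // Inspect queue.currentEntry?.item here and match it against
                // the MusicItemID stored in your own review model.
            }
    }
}
```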
0 replies · 0 boosts · 99 views · Apr ’25
NonLiDAR Photogrammetry
Hello! In iOS 17.5, photogrammetry sessions cannot be performed on iPhones without LiDAR, but I don't think there is much difference in GPU performance between models with and without LiDAR. For example, the chips in the iPhone 14 Pro and iPhone 15 are the same A16 Bionic, so I think the GPU performance is also the same. Despite this, photogrammetry can be performed on the iPhone 14 Pro but not on the iPhone 15. Why is this? In fact, we have confirmed that if you transfer images taken with an iPhone 16 without LiDAR to an iPhone 16 Pro and run a photogrammetry session using those images, a 3D model can be generated. Also, will photogrammetry be supported on high-performance iPhones without LiDAR in the future?
0 replies · 0 boosts · 88 views · Apr ’25
Apple News Publisher: How To Successfully Apply
Hi there, can anyone tell me how to get approved as an Apple News publisher in 2025? We attempted in 2024, but received this message from Apple support: "Thank you for your interest in Apple News. At this time, we're not accepting new applications." When I inquired further, I got this second response: "Apple News is no longer accepting unsolicited applications. To learn more about Apple News requirements, visit the Apple News support page. If you have any feedback, please use this form to send us your comments. Keep in mind that while we read all feedback, we are unable to respond to each submission individually."

My questions are:
- Is this still the case, especially for a legitimate local news outlet?
- Is there a link to apply as a news publisher? I don't seem to have that option at all.

Thanks for any feedback.
0 replies · 1 boost · 121 views · Apr ’25
AVAssetWriterInputTaggedPixelBufferGroupAdaptor Hanging With Tagged Buffers
We've successfully implemented an AVAssetWriter to produce HLS streams (all code is Objective-C++ for interop with an existing codebase) but are struggling to extend the operation to tagged buffers. We're starting to wonder whether the tagged buffers required for an MV-HEVC signal are fully supported when producing HLS segments in a live-stream setting.

We generate a live stream of data using something like:

```objc
UTType *t = [UTType typeWithIdentifier:AVFileTypeMPEG4];
m_writter = [[AVAssetWriter alloc] initWithContentType:t];

// - videoHint describes HEVC and width/height
// - m_videoConfig includes compression settings and, when using MV-HEVC,
//   the correct keys are added (i.e. kVTCompressionPropertyKey_MVHEVCVideoLayerIDs).
//   The app was throwing an exception without these, which was
//   useful to know when we got the configuration right.
m_video = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                         outputSettings:m_videoConfig
                                       sourceFormatHint:videoHint];
```

For either path we're producing CVPixelBufferRefs that contain the raw pixel information (i.e. 32BGRA), so we use an adaptor to make that as simple as possible. If we use a single view and an AVAssetWriterInputPixelBufferAdaptor, things work out very well: we produce segments and the delegate is called. However, if we use the AVAssetWriterInputTaggedPixelBufferGroupAdaptor as shown in the SideBySideToMVHEVC demo project, things go poorly.

We create the tagged buffers with something like:

```objc
CMTagCollectionRef collections[2];

CMTag leftTags[] = {
    CMTagMakeWithSInt64Value(kCMTagCategory_VideoLayerID, (int64_t)0),
    CMTagMakeWithSInt64Value(kCMTagCategory_StereoView, kCMStereoView_LeftEye)
};
CMTagCollectionCreate(kCFAllocatorDefault, leftTags, 2, &(collections[0]));

CMTag rightTags[] = {
    CMTagMakeWithSInt64Value(kCMTagCategory_VideoLayerID, (int64_t)1),
    CMTagMakeWithSInt64Value(kCMTagCategory_StereoView, kCMStereoView_RightEye)
};
CMTagCollectionCreate(kCFAllocatorDefault, rightTags, 2, &(collections[1]));

CFArrayRef tagCollections = CFArrayCreate(kCFAllocatorDefault,
                                          (const void **)collections, 2,
                                          &kCFTypeArrayCallBacks);

CVPixelBufferRef buffers[] = {*b, *alt};
CFArrayRef b = CFArrayCreate(kCFAllocatorDefault,
                             (const void **)buffers, 2,
                             &kCFTypeArrayCallBacks);

CMTaggedBufferGroupRef bufferGroup;
OSStatus res = CMTaggedBufferGroupCreate(kCFAllocatorDefault,
                                         tagCollections, b, &bufferGroup);
```

Perhaps there's something about this Objective-C code that I've buggered up? Hopefully! Anyway, when I submit this tagged buffer group to the adaptor:

```objc
if (![mvVideoAdapter appendTaggedPixelBufferGroup:bufferGroup withPresentationTime:pts]) {
    // report error...
}
```

Appending does not raise any errors; eventually it just hangs and we never return from it...

Real issue: so either:
- The delegate assigned to the AVAssetWriter doesn't fire its assetWriter callback, which should produce the segments, or
- The adaptor hangs on appendTaggedPixelBufferGroup before a segment is ready to be completed (but succeeds for a number of buffer groups before this happens).

This is the same delegate class that's assigned to the non-multi-view code path when MV-HEVC is turned off, which works perfectly.
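One pattern worth ruling out (an assumption about the hang, not a confirmed cause): appending while the input is not ready can stall, so it may help to drive all appends from the pull model and gate each one on isReadyForMoreMediaData. The appendNext closure below is a stand-in for "append one tagged buffer group; return false when done."

```swift
import AVFoundation

// Sketch: pull-model driver that only appends when the input is ready.
func drive(input: AVAssetWriterInput,
           queue: DispatchQueue,
           appendNext: @escaping () -> Bool) {
    input.requestMediaDataWhenReady(on: queue) {
        while input.isReadyForMoreMediaData {
            if !appendNext() { // e.g. one appendTaggedPixelBufferGroup call
                input.markAsFinished()
                return
            }
        }
    }
}
```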
1 reply · 0 boosts · 118 views · Apr ’25
Save MPEG-TS (H.264 or HEVC) video stream using AVAssetWriter
I'm capturing a video stream from a GoPro camera (I demux the UDP MPEG-TS packets) and create CMSampleBuffers from them; this works fine when I display them using AVSampleBufferDisplayLayer. However, when I dump them to disk using AVAssetWriter and then play the file back with AVPlayer, AVPlayer has problems with scrubbing; it also cannot render previous frames, it needs to go back to key frames. Thumbnails generated with AVAssetImageGenerator are mostly distorted and green, even though I set requestedTimeToleranceAfter longer than the key-frame frequency. When I re-encode the saved video with AVAssetExportSession and play that back, I can scrub the video just fine.

Is it because re-transcoding adds additional metadata that enables generating frames when rewinding and scrubbing the video? If so, is there a way to achieve this with AVAssetWriter without much of a time penalty? I need the dump/save operation to be very fast.

I also considered the following: instead of demuxing the video and creating CMSampleBuffers, maybe I could dump the stream directly to disk and somehow add moov atoms with timing information. Would this approach work? If so, where can I find information on how to do it? Thank you!
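On the speed question, one avenue (a sketch, not a guaranteed fix for the scrubbing symptom): an AVAssetWriterInput created with nil outputSettings runs in passthrough mode, re-muxing the already-compressed H.264/HEVC samples into the container without a re-encode, so the save stays fast. Whether scrubbing then behaves depends on the timing and sync-sample attachments on your CMSampleBuffers, which is worth verifying.

```swift
import AVFoundation

// Sketch: passthrough video input; samples are written as-is, no re-encode.
func makePassthroughVideoInput(formatHint: CMFormatDescription) -> AVAssetWriterInput {
    let input = AVAssetWriterInput(mediaType: .video,
                                   outputSettings: nil, // passthrough
                                   sourceFormatHint: formatHint)
    input.expectsMediaDataInRealTime = true
    return input
}
```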
3 replies · 0 boosts · 174 views · Apr ’25
Screen recording audio and video out of sync
I use startCaptureWithHandler to record the screen and AVAssetWriter appendSampleBuffer: to save the audio and video, but when the saved file is played back, the audio and video are out of sync. I don't know if it's an AVAssetWriterInput setup problem. Here is my code:

```objc
NSDictionary *audioCompressionSettings = @{
    AVEncoderBitRatePerChannelKey : @(64000),
    AVFormatIDKey : @(kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey : @(2),
    AVSampleRateKey : @(44100)
};

AVAssetWriterInput *audioAssetWriterInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                       outputSettings:audioCompressionSettings];
audioAssetWriterInput.expectsMediaDataInRealTime = YES;
[_assetWriter addInput:audioAssetWriterInput];

NSDictionary *videoCompressSetting = @{
    AVVideoAverageBitRateKey : @(screenWidth * screenHeight * 5),
    AVVideoMaxKeyFrameIntervalKey : @(30),
    AVVideoProfileLevelKey : AVVideoProfileLevelH264MainAutoLevel
};

NSDictionary *codecSetting = @{
    AVVideoCodecKey : AVVideoCodecTypeH264,
    AVVideoScalingModeKey : AVVideoScalingModeResize,
    AVVideoWidthKey : @(screenWidth * 2),
    AVVideoHeightKey : @(screenHeight * 2),
    AVVideoCompressionPropertiesKey : videoCompressSetting
};

AVAssetWriterInput *videoAssetWriterInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:codecSetting];
videoAssetWriterInput.expectsMediaDataInRealTime = YES;
[_assetWriter addInput:videoAssetWriterInput];
```
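One common source of this kind of drift (an assumption about your setup, not a confirmed diagnosis): starting the writer session at time zero instead of at the first sample's own timestamp, which puts the two tracks on different clocks. A sketch that starts the session lazily from the first buffer that arrives:

```swift
import AVFoundation

// Sketch: start the session at the first sample's PTS, not at .zero.
// Assumes writer.startWriting() has already been called.
final class WriterSession {
    private let writer: AVAssetWriter
    private var sessionStarted = false

    init(writer: AVAssetWriter) { self.writer = writer }

    func append(_ sampleBuffer: CMSampleBuffer, to input: AVAssetWriterInput) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if !sessionStarted {
            writer.startSession(atSourceTime: pts)
            sessionStarted = true
        }
        if input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }
}
```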
1 reply · 0 boosts · 139 views · Apr ’25