PLATFORM AND VERSION: iOS 18.5
I wanted to bring to your attention a critical issue some of our production users are experiencing with the CoinOut app. Specifically, users are encountering a problem when attempting to capture photos of receipts using the app's customized camera feature. The camera, which utilizes AVCaptureVideoPreviewLayer and AVCaptureDevice, occasionally fails to load the preview, resulting in a black screen instead of the expected camera view.
This camera blackout issue is significantly impacting the user experience as it prevents them from snapping photos of their receipts, which is a core functionality of the CoinOut app.
Any help or suggestions on this issue would be greatly appreciated.
STEPS TO REPRODUCE
Open the app and tap the camera icon.
The camera view appears so the user can capture a photo.
For a few production users, the camera preview shows only a black screen.
import UIKit
import AVFoundation
import AudioToolbox

class ViewController: UIViewController {

    @IBOutlet private weak var captureButton: UIButton!

    private var fillLayer: CAShapeLayer!
    private var previewLayer: AVCaptureVideoPreviewLayer!
    private var output: AVCapturePhotoOutput!
    private var device: AVCaptureDevice!
    private var session: AVCaptureSession!
    private var highResolutionEnabled: Bool = false
    private let sessionQueue = DispatchQueue(label: "session queue")

    override func viewDidLoad() {
        super.viewDidLoad()
        setupCamera()
        customiseUI()
    }

    @IBAction func startCamera(sender: UIButton) {
        didTapTakePhoto()
    }

    private func setupCamera() {
        let session = AVCaptureSession()
        session.sessionPreset = .high
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        output = AVCapturePhotoOutput()
        device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
        if let device = self.device {
            do {
                let input = try AVCaptureDeviceInput(device: device)
                if session.canAddInput(input) {
                    session.addInput(input)
                } else {
                    print("\(#fileID):\(#function):\(#line) : Session input addition failed")
                }
                if session.canAddOutput(output) {
                    output.isHighResolutionCaptureEnabled = self.highResolutionEnabled
                    session.addOutput(output)
                } else {
                    print("\(#fileID):\(#function):\(#line) : Session output addition failed")
                }
                previewLayer.videoGravity = .resizeAspectFill
                previewLayer.session = session
                // startRunning() blocks, so it is dispatched off the main thread.
                sessionQueue.async { session.startRunning() }
                self.session = session
                try device.lockForConfiguration()
                if device.isWhiteBalanceModeSupported(.continuousAutoWhiteBalance) {
                    device.whiteBalanceMode = .continuousAutoWhiteBalance
                } else if device.isWhiteBalanceModeSupported(.autoWhiteBalance) {
                    device.whiteBalanceMode = .autoWhiteBalance
                } else {
                    print("\(#fileID):\(#function):\(#line) : White balance mode not supported")
                }
                if device.isFocusModeSupported(.continuousAutoFocus) {
                    device.focusMode = .continuousAutoFocus
                } else if device.isFocusModeSupported(.autoFocus) {
                    device.focusMode = .autoFocus
                }
                device.unlockForConfiguration()
            } catch {
                print("\(#fileID):\(#function):\(#line) : \(error.localizedDescription)")
            }
        } else {
            print("\(#fileID):\(#function):\(#line) : Device found as nil")
        }
    }

    private func customiseUI() {
        // Dim everything outside a centered rectangle using an even-odd fill.
        let path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: view.bounds.width, height: view.bounds.height), cornerRadius: 0)
        let rectangleWidth = view.frame.width - (view.frame.width * 0.16)
        let x = (view.frame.width - rectangleWidth) / 2
        let rectangleHeight = view.frame.height - (view.frame.height * 0.16)
        let y = (view.frame.height - rectangleHeight) / 2
        let roundRect = UIBezierPath(roundedRect: CGRect(x: x, y: y, width: rectangleWidth, height: rectangleHeight), byRoundingCorners: .allCorners, cornerRadii: CGSize(width: 0, height: 0))
        roundRect.move(to: CGPoint(x: view.center.x, y: view.center.y))
        path.append(roundRect)
        path.usesEvenOddFillRule = true

        fillLayer = CAShapeLayer()
        fillLayer.path = path.cgPath
        fillLayer.fillRule = .evenOdd
        fillLayer.opacity = 0.4
        previewLayer.addSublayer(fillLayer)
        previewLayer.frame = view.bounds
        view.layer.addSublayer(previewLayer)
        view.bringSubviewToFront(captureButton)
    }

    private func didTapTakePhoto() {
        let settings = getSettings(camera: device)
        if device.isAdjustingFocus {
            // Wait for focus to settle; "adjustingFocus" is observed via KVO.
            do {
                try device.lockForConfiguration()
                device.focusMode = .continuousAutoFocus
                device.unlockForConfiguration()
                device.addObserver(self, forKeyPath: "adjustingFocus", options: [.new], context: nil)
            } catch {
                print(error)
            }
        } else {
            output.capturePhoto(with: settings, delegate: self)
        }
    }

    func getSettings(camera: AVCaptureDevice) -> AVCapturePhotoSettings {
        var settings = AVCapturePhotoSettings()
        if let rawFormat = output.availableRawPhotoPixelFormatTypes.first {
            settings = AVCapturePhotoSettings(rawPixelFormatType: OSType(rawFormat))
        }
        settings.isHighResolutionPhotoEnabled = highResolutionEnabled
        if let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first {
            settings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType]
        }
        return settings
    }
}

extension ViewController: AVCapturePhotoCaptureDelegate {

    func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
        // Silence the default shutter sound.
        AudioServicesDisposeSystemSoundID(1108)
    }

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let data = photo.fileDataRepresentation(), let image = UIImage(data: data) else { return }
        showImage(cropped: image)
    }

    func showImage(cropped: UIImage) {
        guard let vc = storyboard?.instantiateViewController(withIdentifier: "ImagePreviewViewController") as? ImagePreviewViewController else { return }
        vc.captured = cropped
        present(vc, animated: true)
    }
}
Our streaming app uses FairPlay-protected video streams, which previously worked fine when using AVAssetResourceLoaderDelegate to provide CKCs.
Recently, we migrated to AVContentKeySession, and while everything works as expected during regular playback, we encountered an issue with AirPlay.
Our CKC has a 120-second expiry, so we renew it by calling renewExpiringResponseData.
This triggers the didProvideRenewingContentKeyRequest delegate callback, and we respond with an updated CKC.
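For context, the renewal path looks roughly like this (a simplified sketch; fetchCKC(for:), certificateData, assetIDData, keySession, and currentKeyRequest are placeholders for our actual objects):

func contentKeySession(_ session: AVContentKeySession,
                       didProvideRenewingContentKeyRequest keyRequest: AVContentKeyRequest) {
    keyRequest.makeStreamingContentKeyRequestData(forApp: certificateData,
                                                  contentIdentifier: assetIDData,
                                                  options: nil) { spcData, error in
        guard let spcData = spcData, error == nil else { return }
        // Exchange the SPC for a fresh CKC with our key server.
        fetchCKC(for: spcData) { ckcData in
            let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: ckcData)
            keyRequest.processContentKeyResponse(response)
        }
    }
}

// Triggered shortly before the 120-second expiry:
keySession.renewExpiringResponseData(for: currentKeyRequest)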
However, when streaming via AirPlay, both video and audio freeze exactly after 120 seconds.
To validate the issue, I tested with AVAssetResourceLoaderDelegate and found that I can reproduce the same freeze if I do not renew the key. This suggests that AirPlay is not accepting the renewed CKC when using AVContentKeySession.
Additional Details:
This issue occurs across different iOS versions and various AirPlay devices.
The same content plays without issues when played directly on the device.
The renewal process is successful, and segments continue to load, but playback remains frozen.
Tried renewing the CKC a bit early (at 100 s).
I also tried setting player.usesExternalPlaybackWhileExternalScreenIsActive = true, but the issue persists.
We don't use persistentKey.
Is there anything else that needs to be considered for proper key renewal when AirPlaying?
Any help on how to fix this or confirmation if this is a known issue would be greatly appreciated.
I am trying to use SpeechTranscriber from the Speech framework. Is it possible to use it on the iOS 26 Simulator (macOS Tahoe)? The supportedLocales function returns an empty array.
I am working on an app which plays audio - https://youtu.be/VbAfUk_eYl0?si=nJg5ayy2faWE78-g - and one of the features is, on restart, if you had paused playback of a file at the time the app was previously shut down (or were playing one at the time of shutdown), the paused state and position in the file is restored exactly as it was, on restart.
The functionality works. However, it seems impossible to get the "now playing" information in iOS into the right state to reflect that via the MediaPlayer API. On restart, handlers are attached to the play/pause/togglePlayPause actions on MPRemoteCommandCenter.shared(), and the map of media info is updated on MPNowPlayingInfoCenter.default().nowPlayingInfo.
What happens is that iOS's media view shows the audio as playing and offers a pause button - even though the play action is enabled and the pause action is disabled.
The controls only reflect the correct state once playback has actually been initiated (my workaround is to have the pause action toggle the play state, since otherwise you wouldn't be able to initiate playback from controls in a car without first starting playback once on the device).
I've created a simplified white-noise-player demo to illustrate the problem - simply build and deploy it, and then start the app, lock your device and look at the playback controls on the lock screen. It will show a pause button - same behavior I've described.
https://github.com/timboudreau/ios-play-pause-demo
I've tried a few things to narrow down the source of the issue - for example, thinking that omitting MPNowPlayingInfoPropertyPlaybackProgress and MPMediaItemPropertyPlaybackDuration on startup might do the trick (since the system interpolates elapsed time and it's recommended to update those properties infrequently), but the result is the same, just without a duration or progress shown.
What governs this behavior, and is there some way to explicitly tell the media player API your current state is paused?
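For reference, this is roughly the shape of what I do on restart (a simplified sketch; the real code restores the saved file, duration, and position rather than the placeholder values used here):

let commandCenter = MPRemoteCommandCenter.shared()
commandCenter.playCommand.isEnabled = true
commandCenter.pauseCommand.isEnabled = false
_ = commandCenter.playCommand.addTarget { _ in
    // resume playback here
    return .success
}

let info: [String: Any] = [
    MPMediaItemPropertyTitle: "Restored file",           // placeholder
    MPMediaItemPropertyPlaybackDuration: 300.0,           // placeholder
    MPNowPlayingInfoPropertyElapsedPlaybackTime: 120.0,   // restored position
    MPNowPlayingInfoPropertyPlaybackRate: 0.0             // my understanding: 0 should read as "paused"
]
MPNowPlayingInfoCenter.default().nowPlayingInfo = info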
I'm creating an app that uses AVCaptureSession to pass camera input to an AVCaptureMetadataOutput configured with [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]] so it can detect faces.
After updating to OS 26 Beta 2 and iOS 26 Beta 2, the delegate method of AVCaptureMetadataOutputObjectsDelegate is no longer called on some devices. The following devices are experiencing this issue:
iPad (9th Gen)
iPad Air (4th Gen)
iPhone 15
This issue has not occurred on any other devices I have.
I tried running the AVFoundation sample code from the Apple Developer site on the devices above, and the same problem still occurs. https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
Are any additional settings required after OS 26 beta and iOS 26 beta? Or is there some problem on the OS side?
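For reference, here is a minimal Swift sketch of the setup in question (my actual code is Objective-C; self is assumed to adopt AVCaptureMetadataOutputObjectsDelegate):

func configureFaceMetadataCapture() {
    let session = AVCaptureSession()
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
          let input = try? AVCaptureDeviceInput(device: device),
          session.canAddInput(input) else { return }
    session.addInput(input)

    let metadataOutput = AVCaptureMetadataOutput()
    if session.canAddOutput(metadataOutput) {
        session.addOutput(metadataOutput)
        // Delegate callbacks should arrive on this queue; on the affected devices
        // running the OS 26 betas they never fire.
        metadataOutput.setMetadataObjectsDelegate(self, queue: .main)
        metadataOutput.metadataObjectTypes = [.face]   // set after the output is added
    }
    session.startRunning()
}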
I'm developing an iPad app that will be mostly dedicated to a certain external camera for visually impaired people.
The Linux UVC API (e.g., via guvcview) allows enabling automatic exposure for this camera. The iOS API isExposureModeSupported unfortunately returns false for every exposure mode.
Is this a bug? Or perhaps AVFoundation doesn't support UVC exposure control yet?
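A quick probe along these lines shows the behavior (sketch; assumes iPadOS 17 or later, where external UVC cameras are surfaced through the .external device type):

let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                 mediaType: .video,
                                                 position: .unspecified)
if let camera = discovery.devices.first {
    let modes: [AVCaptureDevice.ExposureMode] = [.locked, .autoExpose, .continuousAutoExposure, .custom]
    for mode in modes {
        // Reportedly prints false for every mode on this camera.
        print(mode, camera.isExposureModeSupported(mode))
    }
}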
Hi Apple Developer Forums,
I’m developing an iOS camera app that processes RAW captures using Core Image. I’m seeing a large “first use” performance penalty specifically when creating the CIImage from CIRAWFilter.outputImage.
What’s slow (important detail)
I’m measuring the time for:
let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint)
let ciImage = rawFilter?.outputImage
This is not CIContext.render(...) / createCGImage(...). It’s just the time to access outputImage (i.e., building the Core Image graph / RAW pipeline setup).
Observed behavior
First time accessing CIRAWFilter.outputImage: ~3 seconds
Second time (same app session, similar RAW): ~3 milliseconds
So something heavy is happening only on first use (decoder initialization, pipeline setup, shader/library compilation, caching, etc.).
Using Metal System Trace, I also noticed that during the slow first call there are many “Create MTLLibrary” events, while the second call doesn’t show this pattern.
Warm-up attempts using bundled DNG
I tried to “warm up” early (e.g., on camera screen entry) by loading a bundled DNG and accessing CIRAWFilter.outputImage before the user's first capture:
Warm-up with a ~247 KB DNG → first real RAW outputImage cost drops to ~1.42s
Warm-up with a ~25 MB DNG → first real RAW outputImage cost drops to ~843ms
This helps, but it’s still far from the steady-state ~3ms.
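The warm-up itself is roughly this (sketch; "warmup.dng" is a placeholder for the bundled resource):

func warmUpRAWPipeline() {
    DispatchQueue.global(qos: .utility).async {
        guard let url = Bundle.main.url(forResource: "warmup", withExtension: "dng"),
              let filter = CIRAWFilter(imageURL: url) else { return }
        // Accessing outputImage forces the RAW graph / shader setup on first use.
        _ = filter.outputImage
    }
}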
Warm-up by capturing a real RAW (works, but concerns)
The only method that fully eliminates the delay is to trigger a real RAW capture programmatically before the user’s first photo, then use that captured rawData to warm up the CIRAWFilter.outputImage path. This brings the first user-facing capture close to the steady-state timing.
However:
In some regions, the camera shutter sound cannot be suppressed, so “hidden warm-up capture” is unacceptable UX.
I’m also unsure whether triggering a real capture without an explicit user action could raise compliance/privacy concerns, even if the image is immediately discarded and never saved/uploaded.
Questions
Is the large first-time cost of CIRAWFilter.outputImage expected (RAW pipeline initialization / shader compilation)?
Is there an Apple-recommended way to pre-initialize the Core Image RAW pipeline / Metal resources so the first outputImage is fast, without taking a real photo?
Are there any best practices (e.g. CIContext creation timing, prepareRender(...), specific options) that reliably reduce this first-use overhead for CIRAWFilter?
Attachments
Figure 1: First RAW capture with no warm-up (~3s outputImage time)
Figure 2: First RAW capture after warm-up with bundled DNG (improved but still hundreds of ms)
Thanks for any guidance or experience sharing!
I'm getting this error when I launch my application on the iPhone 14 Pro via Xcode. Everything builds OK. I'm using the AudioKit plugin and SoundpipeAudioKit.
The error starts as soon as I launch the app and repeats continuously.
I have background processing turned on, as I'd like the sounds to keep playing through headphones when the phone is locked.
I can't find anything online about this error, and none of my catch blocks print anything in the logs either, so I don't know whether this is something that just pops up repeatedly or whether something is fundamentally wrong.
private func setupAudioSession() {
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .default, options: [.mixWithOthers])
        try session.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        errorMessage = "Failed to set up audio session: \(error.localizedDescription)"
        print(errorMessage ?? "")
    }
}

// MARK: - Background Task Handling
private func setupBackgroundTaskHandling() {
    // Handle app entering background
    notificationObservers.append(
        NotificationCenter.default.addObserver(
            forName: UIApplication.didEnterBackgroundNotification,
            object: nil,
            queue: .main,
            using: { [weak self] _ in
                // Safely unwrap self
                guard let self = self else { return }
                self.handleBackgroundTransition()
            }
        )
    )
}
I'm not sure if this is the code causing the issue. Any help would be greatly appreciated. This is the first app I've worked on.
Hello,
I'm new to the Swift MusicKit API and am starting with the implementation in iOS 16.
I'm getting stuck on an issue where there is no background or text color associated with the Artwork object. Is this something you have to make an additional property request for, and if so, how do you do that?
let catalogSearch = MusicCatalogResourceRequest<Album>(matching: \.id, equalTo: item.id)
let catalogResponse = try await catalogSearch.response()
guard let firstItem = catalogResponse.items.first else {
    return
}
In this example, firstItem.artwork only contains the url and what looks like incorrect maximum width/height values.
Here's a printout of firstItem.artwork:
Optional(Artwork(
urlFormat: "musicKit://artwork/library/5F37858D-F46B-4F12-BA67-40FA8DD63D87/{w}x{h}?at=item&fat=&id=7718670444435992305&lid=5F37858D-F46B-4F12-BA67-40FA8DD63D87&mt=music&aat=Music122/v4/37/25/f5/3725f515-249f-7b91-77bb-f479cd48201c/22UMGIM32254.rgb.jpg",
maximumWidth: 0,
maximumHeight: 0
))
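For what it's worth, Artwork does declare optional color properties alongside the url(width:height:) helper; this is how I'm reading them (sketch):

if let artwork = firstItem.artwork {
    let background = artwork.backgroundColor     // CGColor?, appears to be nil here
    let primaryText = artwork.primaryTextColor   // CGColor?, appears to be nil here
    let imageURL = artwork.url(width: 300, height: 300)
    print(background as Any, primaryText as Any, imageURL as Any)
}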
Using the PushToTalk framework, I call requestBeginTransmitting(channelUUID:) from a Bluetooth device, and then start recording audio inside the PTChannelManagerDelegate method channelManager(_:didActivateAudioSession:). The recording completes.
I am trying to build the server for testing on Linux (AlmaLinux 9 VM).
NAME="AlmaLinux"
VERSION="9.7 (Moss Jungle Cat)"
ID="almalinux"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.7"
PLATFORM_ID="platform:el9"
PRETTY_NAME="AlmaLinux 9.7 (Moss Jungle Cat)"
ANSI_COLOR="0;34"
[azuki@AlmaDevVM ~]$ uname -m
x86_64
I have tried the following steps:
Before starting, ensured that Swift 6 installed. Referred https://www.swift.org/install/ for instructions.
Build the library
In Terminal, use the following commands to compile the Swift library:
cd Development/Key_Server_Module/Swift
swift build -Xbuild-tools-swiftc -DTEST_CREDENTIALS
After building the library, I ran the test cases to ensure the library behaves as expected. All unit tests pass with the development credentials.
• Since I was using an x86_64 machine:
export LD_LIBRARY_PATH=./Sources/prebuilt/x86_64-unknown-linux-gnu/
Run all tests:
swift test -Xbuild-tools-swiftc -DTEST_CREDENTIALS --disable-swift-testing
Build the server: Apache
Before starting, ensured the following:
a. Installed Apache HTTPD and the dev tools. Using the following command for installation:
yum install httpd httpd-devel redhat-rpm-config
b. After this, integrated it into the Apache server environment with the Swift library built above. Used the following command to build the server module using apxs:
• Since I was using an x86_64 machine:
apxs -i -a -c
-Wl,-L${PWD}/.build/x86_64-unknown-linux-gnu/debug/
-Wl,-lswift_fpssdk
-Wl,-L${PWD}/Sources/prebuilt/x86_64-unknown-linux-gnu -lfpscrypto
-Wl,-R${PWD}/.build/x86_64-unknown-linux-gnu/debug
server_setup/mod_fps.c
c. Next, copied the dependent libraries to the Apache modules folder using these commands:
• If using an x86_64 machine:
cp Sources/prebuilt/x86_64-unknown-linux-gnu/libfpscrypto.so /usr/lib64/httpd/modules/libfpscrypto.so
cp .build/x86_64-unknown-linux-gnu/debug/libswift_fpssdk.so /usr/lib64/httpd/modules/libswift_fpssdk.so
d. Configuring Apache HTTPD
Configured Apache HTTPD by adding the module and handler to your Apache
HTTPD configuration (/etc/httpd/conf/httpd.conf). Note that the apxs command
may automatically add the LoadModule line in the previous step.
Listen 8080
LoadFile /usr/lib64/httpd/modules/libfpscrypto.so
LoadFile /usr/lib64/httpd/modules/libswift_fpssdk.so
LoadModule fps_module /usr/lib64/httpd/modules/mod_fps.so
<Location "/fps">
SetHandler fps_handler
</Location>
Copy the credentials to the Apache modules folder.
cp -r ../credentials /usr/lib64/httpd/modules/
export FPS_CERT_PATH=/usr/lib64/httpd/modules/credentials/test_certificates.json
e. Run your server
You can run the Apache HTTPD server with the configured module by using the following command:
httpd -D FOREGROUND
No issues seen up to this step.
Get SDK version
[azuki@AlmaDevVM Key_Server_Module]$ curl localhost:8080/fps/v
26.0.0
But when I try to generate a license:
[azuki@AlmaDevVM Key_Server_Module]$ curl -d ../Test_Inputs/iOS/spc_ios_hd_lease_2048.json localhost:8080/fps
{"fairplay-streaming-response":{"create-ckc":[{"id":1,"status":-42601}]}}
Can you please suggest what I might be missing here?
I am trying to use the SpeechDetector module from the Speech framework along with SpeechTranscriber, and it is giving me an error:
Cannot convert value of type 'SpeechDetector' to expected element type 'Array.ArrayLiteralElement' (aka 'any SpeechModule')
Below is how I am using it
let speechDetector = Speech.SpeechDetector()
let transcriber = SpeechTranscriber(locale: Locale.current,
                                    transcriptionOptions: [],
                                    reportingOptions: [.volatileResults],
                                    attributeOptions: [.audioTimeRange])
speechAnalyzer = try SpeechAnalyzer(modules: [transcriber, speechDetector])
I'm experiencing a significant limitation with MusicKit's Dolby Atmos implementation on macOS and would appreciate clarification on whether this is intended behavior or if there are solutions available.
When streaming Dolby Atmos content through MusicKit's ApplicationMusicPlayer, the output is limited to 2-channel stereo, even when:
Audio MIDI Setup is configured for 7.1.4 (12-channel) output
The same tracks play in full multichannel through the native Apple Music app
Dolby Atmos is set to "Automatic" in Apple Music preferences
Please let me know if there is any way to enable this. If not, is this limitation documented anywhere? Thanks!
Hello, our application is unable to output FairPlay-protected content over HDMI to a TV via the official Lightning Digital AV Adapter. Checking the console log from mediaplaybackd, we found that a CoreMediaErrorDomain Code=-19156 error is raised, but we are unable to determine what this error code means.
default 11:18:15.121584+0800 mediaplaybackd keyboss ckb_customURLReadCallback: 0x7fa62f800 60/0 customURLReqID 4 isComplete 1 err -19156 error <private> (0) dokeyCallbacksExist 0
default 11:18:15.121670+0800 mediaplaybackd keyboss ckb_processErrorForRequest: 0x7fa62f800 60/0 handler 4 err 0
default 11:18:15.121752+0800 mediaplaybackd <<<< FigCustomURLHandling >>>> curll_cancelRequestOnQueue: 0x7fa031360: requestID: 4
default 11:18:15.121932+0800 mediaplaybackd keyboss ckb_transitionRequestToTerminalState: 0x7fa62f800 60/0 reqFin err Error Domain=CoreMediaErrorDomain Code=-19156 (-19156) dokeyCallbacksExist 0
default 11:18:15.122025+0800 mediaplaybackd keyboss ckb_transitionRequestToTerminalState: 0x7fa62f800 60/0 retry
default 11:18:15.123195+0800 mediaplaybackd <<<< FigCPECryptorPKD >>>> PostKeyRequestErrorOccurred: 0x7fab7be80 029592C2-093D-400D-B57F-7AB06CC292D1 key request error: Error Domain=CoreMediaErrorDomain Code=-19160 (-19160)
I occasionally receive reports from users that photo import from the Photos library gets stuck and the progress appears to stop indefinitely.
I’m using the following APIs:
func fetchAsset(_ asset: PHAsset) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .exact
    options.isSynchronous = false
    options.isNetworkAccessAllowed = true
    options.progressHandler = { (progress, error, stop, info) in
        // 🚨 never called
    }
    let requestId = PHImageManager.default().requestImageDataAndOrientation(
        for: asset,
        options: options
    ) { data, _, _, info in
        // 🚨 never called
    }
}
Due to repeated reports, I added detailed logs inside the callback closures. Based on the logs, it looks like the request keeps waiting without any callbacks being invoked — neither the progressHandler nor the completion block of requestImageDataAndOrientation is called.
This happens not only with the PHImageManager approach, but also when using PHAsset with PHContentEditingInputRequestOptions — the completion callback is not invoked as well.
func fetchAssetByContentEditingInput(_ asset: PHAsset) {
    let options = PHContentEditingInputRequestOptions()
    options.isNetworkAccessAllowed = true
    asset.requestContentEditingInput(with: options) { contentEditingInput, info in
        // 🚨 never called
    }
}
I suspect this is related to iCloud Photos.
Here is what I confirmed from affected users:
Using the native picker (My app also provides the native picker as an alternative option for attaching photos), iCloud download proceeds normally and the photo can be attached. However, using the PHImageManager-based approach in my app, the same photo cannot be attached.
Even after verifying that the photo has been fully downloaded from iCloud (e.g., by trying “Export Unmodified Originals” in the Photos app as described here: https://support.apple.com/en-us/111762, and confirming the iCloud download progress completed), the callback is still not invoked for that asset.
Detailed flow for (1):
I asked the user to attach the problematic photo (the one where callbacks never fire) using the native photo picker (UIImagePickerController).
The UI showed “Downloading from iCloud” progress.
The progress advanced and the photo was attached successfully.
Then I asked the user to attach the same photo again using my custom photo picker (which uses the PHImageManager APIs mentioned above).
The progress did not advance (No callbacks were invoked).
The operation waited indefinitely and never completed.
Workaround / current behavior:
If I ask users to reboot the device and try again, about 6 out of 10 users can attach successfully afterward.
The remaining ~4 out of 10 users still cannot attach even after rebooting.
For users who are not fixed immediately after reboot, it seems to resolve naturally after some time.
I’ve seen similar reports elsewhere, so I’m wondering if Apple is already aware of an internal issue related to this. If there is any known information, guidance, or recommended workaround, I would appreciate it.
I also logged the properties of affected PHAssets (metadata) when the issue occurs, and I can share them below if that helps troubleshooting:
[size=3.91MB] [PHAssetMediaSubtype(rawValue: 528)+DepthEffect | userLibrary | (4284x5712) | adjusted=true]
[size=3.91MB] [PHAssetMediaSubtype(rawValue: 528)+DepthEffect | userLibrary | (4284x5712) | adjusted=true]
[size=2.72MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
[size=2.72MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
[size=2.49MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
[size=2.49MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
I’m encountering a consistent crash in WebKit when using WKWebView to play a YouTube playlist in my iOS app. Playback starts successfully, but the web process terminates during the second video in the playlist. This only occurs on physical devices, not in the simulator.
Here’s a simplified Swift example of my setup:
import SwiftUI
import WebKit

struct ContentView: View {
    private let playlistID = "PLig2mjpwQBZnghraUKGhCqc9eAy0UbpDN"

    var body: some View {
        YouTubeWebView(playlistID: playlistID)
            .edgesIgnoringSafeArea(.all)
    }
}

struct YouTubeWebView: UIViewRepresentable {
    let playlistID: String

    func makeUIView(context: Context) -> WKWebView {
        let config = WKWebViewConfiguration()
        config.allowsInlineMediaPlayback = true
        let webView = WKWebView(frame: .zero, configuration: config)
        webView.scrollView.isScrollEnabled = true
        let html = """
        <!doctype html>
        <html>
        <head>
        <meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0">
        <style>body,html{height:100%;margin:0;background:#000}iframe{width:100%;height:100%;border:0}</style>
        </head>
        <body>
        <iframe
          src="https://www.youtube-nocookie.com/embed/videoseries?list=\(playlistID)&controls=1&rel=0&playsinline=1&iv_load_policy=3"
          frameborder="0"
          allow="encrypted-media; picture-in-picture; fullscreen"
          webkit-playsinline
          allowfullscreen
        ></iframe>
        </body>
        </html>
        """
        webView.loadHTMLString(html, baseURL: nil)
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {}
}

#Preview {
    ContentView()
}
Observed behavior:
First video plays without issue.
Web process crashes when the second video in the playlist starts.
Console logs show WebProcessProxy::didClose and repeated memory status messages.
Using ProcessAssertion or background activity does not prevent the crash.
Only occurs on physical devices; simulators do not reproduce the issue.
Questions:
Is there something I should change or add in my WKWebView setup or HTML/iframe to prevent the crash when playing the second video in a playlist on physical iOS devices?
Is there an officially supported way to limit memory or prevent WebKit from terminating the web process during multi-video playback?
Are there recommended patterns for playing YouTube playlists in a WKWebView on iOS without risking crashes?
Any tips for debugging or configuring WKWebView to make it more stable for continuous playlist playback?
Thanks in advance for any guidance!
I'd like to find out: Can backgrounded apps record audio?
In the past as I recall, I found that backgrounded apps were pretty restricted and couldn't do much of anything.
However I'm not familiar with the current state of affairs.
With iOS 15.8 and above, can backgrounded apps record audio if they've been given permission by the user to access the microphone?
Thanks.
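For context, this is the kind of setup I assume is involved (a sketch; it presumes the audio background mode is declared in Info.plist and microphone permission has been granted):

import AVFoundation

func startBackgroundCapableRecording(to url: URL) throws -> AVAudioRecorder {
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.record, mode: .default, options: [])
    try audioSession.setActive(true)

    let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.record()   // continues while the audio session remains active in the background
    return recorder
}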
Hi everyone,
I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing.
I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time
That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement.
My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?
I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated!
Thanks in advance.
Hi all,
I'm trying to diagnose and resolve an issue with stuttering video playback using the standard AVPlayer. The video in question is a 4K, 39-second file in *.mov format, being played on an iOS device. It's served via a local HTTP server that proxies requests to a backend to fetch and process the content. The project uses end-to-end encrypted storage, which necessitates the proxy for handling data processing. While playback in offline scenarios is smooth, we are encountering issues with smooth playback during streaming. The same video streams smoothly on other platforms using the same connection, so network limitations are not a factor.
On iOS, playback is consistently choppy, with pauses every 1-3 seconds. The video does not appear to buffer adequately for smooth playback.
One particularly curious aspect is the seemingly random pattern of Content-Range requests made by the AVPlayer when streaming the video. Below is an example of the range requests:
I was testing audio playback from YouTube in Safari, and the sound was clipping heavily. At first, I thought it might be due to the poor quality of my small sound system. However, when I took a screenshot and the screenshot sound effect itself produced a loud clipping noise, it became clear that this is not a mechanical problem with my speakers, nor an issue specific to YouTube or Safari. This appears to be a system-wide audio issue in macOS Tahoe 26 - Beta 5.