Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and discover what's possible for your app here.

Posts under Machine Learning & AI topic

Post

Replies

Boosts

Views

Activity

Core Model Editor and Params
Optimal Precision
• Current Precision: Mixed (Float32, int32)
• Optimal Precision: Not specified in the image, but typically involves using the most efficient data type for the model's operations to balance speed and memory usage without significant loss of accuracy.
Comparison:
• Mixed Precision: Utilizes both Float32 and int32 to optimize performance. Float32 provides high precision, while int32 reduces memory usage and increases computational speed.
• Optimal Precision: Aimed at achieving the best trade-off between performance and accuracy, potentially using other data types like Float16 (bfloat16) for even greater efficiency in certain hardware environments.
Operation Distribution
• Current Distribution:
• iOS18.mul: 168
• iOS18.transpose: 126
• iOS18.linear: 98
• iOS18.add: 97
• iOS18.sliceByIndex: 96
• iOS18.expandDims: 74
• iOS18.concat: 72
• iOS18.squeeze: 72
• iOS18.reshape: 67
• iOS18.layerNorm: 49
• iOS18.matmul: 48
• iOS18.gelu: 26
• iOS18.softmax: 24
• Split: 24
• conv: 1
• iOS18.conv: 1
Comparison:
• Operation Count: Indicates how frequently each operation is executed. High counts for operations like mul, transpose, and linear suggest these are computationally intensive parts of the model.
• Optimization Opportunities: Reducing the count of high-frequency operations or optimizing their execution can improve performance. This might involve pruning unnecessary operations, optimizing algorithms, or leveraging hardware acceleration.
General Recommendations
• Precision Tuning: Experiment with different precision levels to find the best balance for your specific hardware and accuracy requirements.
• Operation Optimization: Focus on optimizing the most frequent operations. Techniques include using more efficient algorithms, parallelizing computations, or utilizing specialized hardware like GPUs or TPUs.
• Benchmarking: Regularly benchmark the model to assess the impact of changes and ensure that optimizations lead to meaningful performance improvements.
By focusing on these areas, you can potentially enhance the efficiency and performance of your ML model.
0
0
40
1w
Vision Framework VNTrackObjectRequest: Minimum Valid Bounding Box Size Causing Internal Error (Code=9)
I'm developing a tennis ball tracking feature using the Vision framework in Swift, specifically utilizing VNDetectedObjectObservation and VNTrackObjectRequest. Occasionally (but not always), I receive the following runtime error:
Failed to perform SequenceRequest: Error Domain=com.apple.Vision Code=9 "Internal error: unexpected tracked object bounding box size" UserInfo={NSLocalizedDescription=Internal error: unexpected tracked object bounding box size}
From my investigation, I suspect the issue arises when the bounding box from the initial observation (VNDetectedObjectObservation) is too small. However, Apple's documentation doesn't clearly define the minimum bounding box size that VNTrackObjectRequest considers valid. Could someone clarify:
What is the minimum acceptable bounding box width and height (normalized) that Vision's VNTrackObjectRequest expects?
Is there any recommended practice or official guidance for validating the bounding box size before creating a tracking request? (A validation sketch follows this post.)
This information would be extremely helpful to reliably avoid this internal error. Thank you!
0
0
130
Apr ’25
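On the bounding-box question above: Apple does not publish a minimum tracked-object size, so a defensive pre-check before creating the tracking request is a reasonable stopgap. In this sketch the 0.01 normalized threshold is an assumption, not a documented value; tune it against whatever sizes actually trigger the Code=9 error on your devices.

import CoreGraphics
import Vision

// Hypothetical minimum normalized side length; the real threshold is undocumented.
let assumedMinimumNormalizedSide: CGFloat = 0.01

func makeTrackingRequest(for observation: VNDetectedObjectObservation) -> VNTrackObjectRequest? {
    let box = observation.boundingBox
    // Skip detections that are likely too small for the tracker to accept.
    guard box.width >= assumedMinimumNormalizedSide,
          box.height >= assumedMinimumNormalizedSide else {
        return nil
    }
    let request = VNTrackObjectRequest(detectedObjectObservation: observation)
    request.trackingLevel = .accurate
    return request
}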
tensorflow-metal ReLU activation fails to clip negative values on M4 Apple Silicon
Environment:
Hardware: Mac M4
OS: macOS Sequoia 15.7.4
TensorFlow-macOS Version: 2.16.2
TensorFlow-metal Version: 1.2.0
Description: When using the tensorflow-metal plug-in for GPU acceleration on M4, the ReLU activation function (both as a layer and as an activation argument) fails to correctly clip negative values to zero. The same code works correctly when forced to run on the CPU.
Reproduction Script:

import os
import numpy as np
import tensorflow as tf

# weights and biases = -1
weights = [np.ones((10, 5)) * -1, np.ones(5) * -1]
# input = 1
data = np.ones((1, 10))

# comment this line => GPU => get negative values
# uncomment this line => CPU => no negative values
# tf.config.set_visible_devices([], 'GPU')

# create model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(5, activation='relu')
])

# set weights
model.layers[0].set_weights(weights)

# get output
output = model.predict(data)

# check if negative is present
print(f"min value: {output.min()}")
print(f"is negative present? {np.any(output < 0)}")
1
0
108
7h
Memory stride warning when loading CoreML models on ANE
When I do an uncached load of a Core ML model on the ANE, I receive this warning in the Xcode console: Type of hiddenStates in function main's I/O contains unknown strides. Using unknown strides for MIL tensor buffers with unknown shapes is not recommended in E5ML. Please use row_alignment_in_bytes property instead. Refer to https://e5-ml.apple.com/more-info/memory-layouts.html for more information. However, the web link does not seem to be working. Where can I find more information about this, and how can I fix it?
1
0
265
Jul ’25
How to implement a CoreML model into an iOS app properly?
I am working on a lung cancer scanning app for iOS with a Core ML model, and when I test the app on a physical device the model returns the same prediction 100% of the time. I even changed the names around and it still returned the same case. I have listed my labels as cases, and it's just stuck on the same case (case 1). My code is below: https://github.com/ShivenKhurana1/Detect-to-Protect-App/blob/main/DetectToProtect/SecondView.swift I couldn't add the code as it was too long, so I hope the GitHub link is fine! (A preprocessing sketch follows this post.)
1
0
168
Mar ’25
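On the constant-prediction question above: on-device results that never change are very often an input-preprocessing problem rather than a model problem, so it is worth confirming that the image actually reaches the model scaled and oriented the way it expects. Here is a minimal Vision-based classification sketch; LungCancerClassifier is a stand-in for the poster's generated model class, not its real name.

import CoreML
import UIKit
import Vision

// LungCancerClassifier is a hypothetical generated model class name.
func classify(_ uiImage: UIImage) throws -> String? {
    let coreMLModel = try LungCancerClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel)
    // Let Vision crop and scale the input to what the model expects.
    request.imageCropAndScaleOption = .centerCrop

    guard let cgImage = uiImage.cgImage else { return nil }
    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
    try handler.perform([request])

    // Top classification label, if any.
    return (request.results as? [VNClassificationObservation])?.first?.identifier
}

If every image still lands on the same label, log the confidence values of all observations; a flat distribution usually points at the input pipeline rather than the trained model.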
Apple OCR framework seems to be holding on to allocations every time it is called.
Environment:
macOS 26.2 (Tahoe)
Xcode 16.3
Apple Silicon (M4)
Sandboxed Mac App Store app
Description: Repeated use of VNRecognizeTextRequest causes permanent memory growth in the host process. The physical footprint increases by approximately 3-15 MB per OCR call and never returns to baseline, even after all references to the request, handler, observations, and image are released.

private func selectAndProcessImage() {
    let panel = NSOpenPanel()
    panel.allowedContentTypes = [.image]
    panel.allowsMultipleSelection = false
    panel.canChooseDirectories = false
    panel.message = "Select an image for OCR processing"
    guard panel.runModal() == .OK, let url = panel.url else { return }

    selectedImageURL = url
    isProcessing = true
    recognizedText = "Processing..."

    // Run OCR on a background thread to keep UI responsive
    let workItem = DispatchWorkItem {
        let result = performOCR(on: url)
        DispatchQueue.main.async {
            recognizedText = result
            isProcessing = false
        }
    }
    DispatchQueue.global(qos: .userInitiated).async(execute: workItem)
}

private func performOCR(on url: URL) -> String {
    // Wrap EVERYTHING in autoreleasepool so all ObjC objects are drained immediately
    let resultText: String = autoreleasepool {
        // Load image and convert to CVPixelBuffer for explicit memory control
        guard let imageData = try? Data(contentsOf: url) else {
            return "Error: Could not read image file."
        }
        guard let nsImage = NSImage(data: imageData) else {
            return "Error: Could not create image from file data."
        }
        guard let cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
            return "Error: Could not create CGImage."
        }

        let width = cgImage.width
        let height = cgImage.height

        // Create a CVPixelBuffer from the CGImage
        var pixelBuffer: CVPixelBuffer?
        let attrs: [String: Any] = [
            kCVPixelBufferCGImageCompatibilityKey as String: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
        ]
        let status = CVPixelBufferCreate(
            kCFAllocatorDefault,
            width,
            height,
            kCVPixelFormatType_32ARGB,
            attrs as CFDictionary,
            &pixelBuffer
        )
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
            return "Error: Could not create CVPixelBuffer (status: \(status))."
        }

        // Draw the CGImage into the pixel buffer
        CVPixelBufferLockBaseAddress(buffer, [])
        guard let context = CGContext(
            data: CVPixelBufferGetBaseAddress(buffer),
            width: width,
            height: height,
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue
        ) else {
            CVPixelBufferUnlockBaseAddress(buffer, [])
            return "Error: Could not create CGContext for pixel buffer."
        }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        CVPixelBufferUnlockBaseAddress(buffer, [])

        // Run OCR
        let requestHandler = VNImageRequestHandler(cvPixelBuffer: buffer, options: [:])
        let request = VNRecognizeTextRequest()
        request.recognitionLevel = .accurate
        request.usesLanguageCorrection = true

        do {
            try requestHandler.perform([request])
        } catch {
            return "Error during OCR: \(error.localizedDescription)"
        }

        guard let observations = request.results, !observations.isEmpty else {
            return "No text found in image."
        }

        let lines = observations.compactMap { observation in
            observation.topCandidates(1).first?.string
        }

        // Explicitly nil out the pixel buffer before the pool drains
        pixelBuffer = nil

        return lines.joined(separator: "\n")
    }
    // Everything — Data, NSImage, CGImage, CVPixelBuffer, VN objects — released here
    return resultText
}
0
0
121
2w
Foundation Models framework dyld symbol errors after macOS 26 Beta 2 - LanguageModelSession constructor missing
Foundation Models framework worked perfectly on macOS 26 Beta 2, but starting from Beta 3 and continuing through Beta 6 (latest), I get dyld symbol errors even with the exact code from Apple's documentation.
Environment:
macOS 26.0 Beta 6 (25A5351b)
Xcode 26 Beta 6
M4 Max MacBook Pro
Apple Intelligence enabled and downloaded
Error Details:
dyld[Process]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /path/to/app.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels
Code Used (Exact from Documentation):

import FoundationModels

// This worked on Beta 2, crashes on Beta 3+
let model = SystemLanguageModel.default
let session = LanguageModelSession(model: model)
let response = try await session.respond(to: "Hello")

What I've Verified:
FoundationModels.framework exists in /System/Library/Frameworks/
Framework is properly linked in the Xcode project
Apple Intelligence is enabled and working
Same code works in older beta versions
Issue persists even with completely fresh Xcode projects
Analysis: The dyld error suggests the LanguageModelSession(model:) constructor is missing. The symbol shows it's looking for a constructor with parameters (model:guardrails:tools:instructions:), but the documentation still shows the simple (model:) constructor.
Questions:
Has the LanguageModelSession API changed since Beta 2?
Should we now use the constructor with guardrails/tools/instructions parameters?
Is this a known issue with recent betas?
Are there updated code samples for the current API?
Additional Context: This affects both basic SystemLanguageModel usage AND custom adapter loading. The same dyld symbol errors occur when trying to create SystemLanguageModel(adapter: adapter) as well. Any guidance on the correct API usage for current betas would be greatly appreciated. The documentation appears to be out of sync with the actual framework implementation. (An availability-check sketch follows this post.)
1
0
681
Sep ’25
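On the Foundation Models dyld error above: a missing-symbol crash at launch typically means the binary was built against a different beta SDK than the installed OS, so rebuilding with the Xcode beta that matches the running macOS build is the first thing to try. Separately, checking model availability before creating a session avoids hard failures at the call site. This sketch follows the documented API surface and is not a confirmed fix for the symbol mismatch.

import FoundationModels

func askModel(_ prompt: String) async throws -> String? {
    // Verify the on-device model is usable before creating a session.
    guard case .available = SystemLanguageModel.default.availability else {
        print("On-device model is not available on this system")
        return nil
    }
    let session = LanguageModelSession()   // default system model
    let response = try await session.respond(to: prompt)
    return response.content
}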
Siri not calling my INExtension
Things I did:
created an Intents Extension target
added "Supported Intents" to both my main app target and the intent extension, with "INAddTasksIntent" and "INCreateNoteIntent"
created the AppIntentVocabulary in my main app target
created the handlers in the code in the Intents Extension target

class AddTaskIntentHandler: INExtension, INAddTasksIntentHandling {
    func resolveTaskTitles(for intent: INAddTasksIntent) async -> [INSpeakableStringResolutionResult] {
        if let taskTitles = intent.taskTitles {
            return taskTitles.map { INSpeakableStringResolutionResult.success(with: $0) }
        } else {
            return [INSpeakableStringResolutionResult.needsValue()]
        }
    }

    func handle(intent: INAddTasksIntent) async -> INAddTasksIntentResponse {
        // my code to handle this...
        let response = INAddTasksIntentResponse(code: .success, userActivity: nil)
        response.addedTasks = tasksCreated.map {
            INTask(
                title: INSpeakableString(spokenPhrase: $0.name),
                status: .notCompleted,
                taskType: .completable,
                spatialEventTrigger: nil,
                temporalEventTrigger: intent.temporalEventTrigger,
                createdDateComponents: DateHelper.localCalendar().dateComponents([.year, .month, .day, .minute, .hour], from: Date.now),
                modifiedDateComponents: nil,
                identifier: $0.id
            )
        }
        return response
    }
}

class AddItemIntentHandler: INExtension, INCreateNoteIntentHandling {
    func resolveTitle(for intent: INCreateNoteIntent) async -> INSpeakableStringResolutionResult {
        if let title = intent.title {
            return INSpeakableStringResolutionResult.success(with: title)
        } else {
            return INSpeakableStringResolutionResult.needsValue()
        }
    }

    func resolveGroupName(for intent: INCreateNoteIntent) async -> INSpeakableStringResolutionResult {
        if let groupName = intent.groupName {
            return INSpeakableStringResolutionResult.success(with: groupName)
        } else {
            return INSpeakableStringResolutionResult.needsValue()
        }
    }

    func handle(intent: INCreateNoteIntent) async -> INCreateNoteIntentResponse {
        do {
            // my code for handling this...
            let response = INCreateNoteIntentResponse(code: .success, userActivity: nil)
            response.createdNote = INNote(
                title: INSpeakableString(spokenPhrase: itemName),
                contents: itemNote.map { [INTextNoteContent(text: $0)] } ?? [],
                groupName: INSpeakableString(spokenPhrase: list.name),
                createdDateComponents: DateHelper.localCalendar().dateComponents([.day, .month, .year, .hour, .minute], from: Date.now),
                modifiedDateComponents: nil,
                identifier: newItem.id
            )
            return response
        } catch {
            return INCreateNoteIntentResponse(code: .failure, userActivity: nil)
        }
    }
}

uninstalled my app
restarted my physical device and simulator
Yet, when I say "Remind me to buy dog food in Index" (Index is the name of my app), as stated in the examples of INAddTasksIntent, Siri proceeds to say that a list named "Index" doesn't exist in the Apple Reminders app, instead of processing the request in my app. Am I missing something?
1
0
80
2w
How to test for VisualIntelligence available on device?
I'm adding Visual Intelligence support to my app, and now want to add a Tip using TipKit to guide users to this feature from within my app. I want to add a Rule to my Tip which will only show the Tip on devices where Visual Intelligence is supported (e.g., not on an iPhone 14 Pro Max). What is the best way for me to determine availability to set this TipKit rule? (A rule sketch follows this post.) Here's the documentation I'm following for Visual Intelligence: https://developer.apple.com/documentation/visualintelligence/integrating-your-app-with-visual-intelligence
0
0
712
Sep ’25
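On the Visual Intelligence tip question above: there is no single documented availability call to point to here, but once the app has made its own determination, gating the tip is a one-line TipKit rule. In this sketch the availability check itself is left as a hypothetical helper you would replace with whatever signal you settle on.

import SwiftUI
import TipKit

struct VisualIntelligenceTip: Tip {
    // Set this from your own availability check at app launch;
    // how to compute it is the open question in this thread.
    @Parameter
    static var isSupported: Bool = false

    var title: Text { Text("Try Visual Intelligence") }
    var message: Text? { Text("Point the camera at an object to find it in the app.") }

    var rules: [Rule] {
        // Only show the tip on devices where the feature is supported.
        #Rule(Self.$isSupported) { $0 == true }
    }
}

// At launch (deviceSupportsVisualIntelligence() is a hypothetical helper, not a real API):
// VisualIntelligenceTip.isSupported = deviceSupportsVisualIntelligence()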
A specific mlmodelc model runs on iPhone 15, but not on iPhone 16
As described in the title, the model I have built works completely on iPhone 15 / A16 Bionic; on the other hand, it does not run on iPhone 16 / A18 and fails with the following error message.
E5RT encountered an STL exception. msg = MILCompilerForANE error: failed to compile ANE model using ANEF. Error=_ANECompiler : ANECCompile() FAILED.
E5RT: MILCompilerForANE error: failed to compile ANE model using ANEF. Error=_ANECompiler : ANECCompile() FAILED (11)
It consumes 1.5 to 1.6 GB of RAM while loading the model, then the consumption drops to less than 100 MB on both iPhone 15 and 16. After that, only on iPhone 16, the above error is shown in the Xcode log, memory consumption surges to 5 to 6 GB, and the system kills the app. It works well only on iPhone 15. This model is built with Core ML Tools. So far I have tried deployment targets from iOS 16 to 18 and the compute units CPU_AND_NE and ALL, but none of these has solved the issue. Ultimately, what kind of fix should I make? (A fallback sketch follows this post.)
minimum_deployment_target = ct.target.iOS18
compute_units = ct.ComputeUnit.ALL
compute_precision = ct.precision.FLOAT16
2
0
216
May ’25
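On the iPhone 16 ANE failure above: a common mitigation while an ANE compiler issue is investigated is to retry the load with the Neural Engine excluded, so affected devices fall back to CPU/GPU instead of ballooning in memory and being killed. A sketch, assuming modelURL points at the compiled .mlmodelc in the app bundle.

import CoreML

// modelURL is assumed to point at the compiled .mlmodelc bundle.
func loadModelWithFallback(at modelURL: URL) throws -> MLModel {
    let preferred = MLModelConfiguration()
    preferred.computeUnits = .all            // ANE + GPU + CPU

    do {
        return try MLModel(contentsOf: modelURL, configuration: preferred)
    } catch {
        // If ANE compilation fails (as on the A18 device), retry without the ANE.
        let fallback = MLModelConfiguration()
        fallback.computeUnits = .cpuAndGPU
        return try MLModel(contentsOf: modelURL, configuration: fallback)
    }
}

This only works around the symptom; the underlying ANECCompile failure is still worth a Feedback Assistant report with the model attached.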
ImagePlayground: Programmatic Creation Error
Hardware: MacBook Pro M4 (Nov 2024)
Software: macOS Tahoe 26.0 & Xcode 26.0
Apple Intelligence is activated and the Image Playground macOS app works.
Running the following in Xcode throws ImagePlayground.ImageCreator.Error.creationFailed. Any suggestions on how to make this work?

import Foundation
import ImagePlayground

Task {
    let creator = try await ImageCreator()
    guard let style = creator.availableStyles.first else {
        print("No styles available")
        exit(1)
    }
    let images = creator.images(
        for: [.text("A cat wearing mittens.")],
        style: style,
        limit: 1)
    for try await image in images {
        print("Generated image: \(image)")
    }
    exit(0)
}
RunLoop.main.run()
0
0
322
Sep ’25
Threading issues when using debugger
Hi, I am modifying the sample camera app that is here: https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview ... In processPreviewImages, I am using the Vision APIs to generate a segmentation mask for a person/object, then compositing that person onto a different background (with some other filtering). The filtering and compositing are done via Core Image. At the end, I convert the CIImage to a CGImage and then to a SwiftUI Image. When I run it on my iPhone, it works fine and has not crashed. When I run it on the iPhone with the debugger attached, it crashes within a few seconds with: EXC_BAD_ACCESS in libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>> It had previously been working fine with the debugger, so I'm not sure what has changed. Is there a difference in how the Vision APIs are executed when the debugger is attached vs. not?
1
0
385
Jan ’26
Apple on-device AI models
Hello, I am studying macOS 26 Apple Intelligence features. I have created a basic Swift program with Xcode that sends prompts to FoundationModels.LanguageModelSession. It works fine, but this model is not trained for programming or code completion. Xcode has an AI code completion feature called the "Predictive Code Completion" model. So there are multiple on-device models on macOS 26? Are there others? Is there a way for me to send prompts to this predictive code completion model from my program? Thanks
1
0
308
Oct ’25
Is there an API to check if a Core ML compiled model is already cached?
Hello Apple Developer Community, I'm investigating Core ML model loading behavior and noticed that even when the compiled model path remains unchanged after an app update, the first run still triggers an "uncached load". This seems to impact the user experience with unnecessary delays. Question: Does Core ML provide any public API to check whether a compiled model (from a specific .mlmodelc path) is already cached by the system? If such an API exists, we'd like to use it in our pre-loading decision logic, and only perform a background pre-load when the model isn't cached. Has anyone encountered similar scenarios or found official solutions? Any insights would be greatly appreciated! (A pre-warming sketch follows this post.)
0
0
150
May ’25
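On the cache-check question above: no public API for querying the compiled-model cache comes to mind, so one pragmatic alternative is to always pre-warm in the background and simply measure the load time; a slow first load implies the cache was cold. A sketch, assuming modelURL points at the bundled .mlmodelc.

import CoreML

// Pre-warm the model off the main thread. The first load after an app update
// pays the recompilation/caching cost; subsequent loads should be fast.
func prewarmModel(at modelURL: URL) {
    Task.detached(priority: .utility) {
        let start = Date()
        do {
            let configuration = MLModelConfiguration()
            _ = try await MLModel.load(contentsOf: modelURL, configuration: configuration)
            print("Model pre-warm took \(Date().timeIntervalSince(start)) s")
        } catch {
            print("Model pre-warm failed: \(error)")
        }
    }
}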
VNDetectTextRectanglesRequest not detecting text rectangles (includes image)
Hi everyone, I'm trying to use VNDetectTextRectanglesRequest to detect text rectangles in an image. Here's my current code:

guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else { return }

let textDetectionRequest = VNDetectTextRectanglesRequest { request, error in
    if let error = error {
        print("Text detection error: \(error)")
        return
    }
    guard let observations = request.results as? [VNTextObservation] else {
        print("No text rectangles detected.")
        return
    }
    print("Detected \(observations.count) text rectangles.")
    for observation in observations {
        print(observation.boundingBox)
    }
}
textDetectionRequest.revision = VNDetectTextRectanglesRequestRevision1
textDetectionRequest.reportCharacterBoxes = true

let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
do {
    try handler.perform([textDetectionRequest])
} catch {
    print("Vision request error: \(error)")
}

The request completes without error, but no text rectangles are detected — the observations array is empty (count = 0). Here's a sample image I'm testing with: I expected VNTextObservation results, but I'm not getting any. Is there something I'm missing in how this API works? Or could it be a limitation of this request or revision? (A cross-check sketch follows this post.) Thanks for any help!
2
0
152
May ’25
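On the empty text-rectangle results above: a quick diagnostic is to run the same CGImage through VNRecognizeTextRequest, which also reports bounding boxes on its observations. This is only a cross-check, not a claim that VNDetectTextRectanglesRequest is at fault for this image.

import Vision

// Cross-check with the newer text recognizer, which also reports bounding boxes.
func recognizeTextBoxes(in cgImage: CGImage) throws -> [CGRect] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .fast        // boxes are enough for this check

    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
    try handler.perform([request])

    // Normalized rects (origin bottom-left, as with VNTextObservation).
    return (request.results ?? []).map { $0.boundingBox }
}

If the recognizer finds boxes where the rectangle detector does not, the image itself is fine and the difference comes down to the older request or its revision.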
Qwen3 VL CoreML
Looking for help with (or to help with, given the pending documentation enhancement) the Vibe Coders edition of the Core ML editor. Also looking for more information on how to use the .mlkey, and on whether my model is supposed to say iOS18 when I am planning to use it on a Mac. Apple Intelligence seems to think Core ML is for iOS, but are the capabilities extended when running on the Neural Engine on a MacBook? How do I use this graph? Coming in hot, sorry. By the way, there are hundreds of feedback and crash reports sent in from me for additional info. I attached an image that might help with updating the tags.
1
0
51
4h
SpeechAnalyzer / AssetInventory and preinstalled assets
While testing the “Bringing advanced speech-to-text capabilities to your app” sample app demonstrating the use of the iOS 26 SpeechAnalyzer, I noticed that the language model for the English locale had presumably already been downloaded. Upon checking the documentation of AssetInventory, I found that the language model can indeed be preinstalled on the system. Can someone from the dev team share more info about which assets are preinstalled by the system? For example, can we safely assume that the English language model will almost certainly be preinstalled by the OS if the phone has the English locale? (An asset-check sketch follows this post.)
1
0
255
Jul ’25
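On the preinstalled-assets question above: rather than assuming which locales ship with the OS, the sample app's pattern is to check the installed locales at runtime and download only when needed, which covers both cases. A sketch following that sample's API names, which may change between betas.

import Speech

// Ensure transcription assets for a locale are present, downloading only if needed
// (names follow the WWDC sample; exact signatures may differ between betas).
func ensureAssets(for locale: Locale) async throws {
    let transcriber = SpeechTranscriber(locale: locale,
                                        transcriptionOptions: [],
                                        reportingOptions: [],
                                        attributeOptions: [])

    let installed = await SpeechTranscriber.installedLocales
    let alreadyInstalled = installed.contains {
        $0.identifier(.bcp47) == locale.identifier(.bcp47)
    }
    guard !alreadyInstalled else { return }   // e.g. a model preinstalled by the OS

    if let request = try await AssetInventory.assetInstallationRequest(supporting: [transcriber]) {
        try await request.downloadAndInstall()
    }
}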