Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Post / Replies / Boosts / Views / Activity

AI-Powered Feed Customization via User-Defined Algorithm
Hey guys 👋 I’ve been thinking about a feature idea for iOS that could totally change the way we interact with apps like Twitter/X. Imagine if we could define our own recommendation algorithm, and have an AI on the iPhone that replaces the suggested tweets in the feed with ones that match our personal interests — based on public tweets, and without hacking anything. Kinda like a personalized "AI skin" over the app that curates content you actually care about. Feels like this would make content way more relevant and less algorithmically manipulative. Would love to know what you all think — and if Apple could pull this off 🔥
1
0
83
Jun ’25
ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
Just tried to write a very simple test using Foundation Models, but it gave me an error like this:

"ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
establishment of session failed with Missing entitlement: com.apple.modelmanager.inference"

The simple code is listed below:

```swift
let session: LanguageModelSession = LanguageModelSession()
let response = try? await session.respond(to: "What is the capital of France?")
print("Response: \(response)")
```

So what's the problem here?
2
0
265
Jul ’25
CoreML: Model loading utilities
Hello, we find that models sometimes load very fast (<< 1 second) and sometimes encounter very long load times (>> 120 seconds). During such slow loads, the model is being compiled. We would greatly appreciate the ability to check cache validity via CoreML and determine when we are about to encounter a long load time, so that we can mitigate it and provide a good user experience. A secondary issue: sometimes the cache is corrupted (typically a .mpsgraphpackage yielding Metal cold asserts). This yields load failures and OS errors that persist between launches, and we have to manually nuke the cache (~/Library/..../my-app/...) for the CoreML assets. A CoreML API for clearing caches, and hardening of the load paths against asserts, would be appreciated.
1
0
161
Jun ’25
Dynamically Create Tool Argument Type
According to the Tool documentation, the arguments to a tool are specified as a static struct type T, which is passed to tool.call(arguments: T). However, if the arguments are not known until runtime, is it still possible to create a Tool object with the proper parameters? Say a JSON-style dictionary is passed into the Tool's init function to specify T: is this achievable? For illustration, a sketch of roughly what I'd hope to write is below.
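This is only a sketch: the DynamicGenerationSchema initializers, the GenerationSchema(root:dependencies:) call, and using GeneratedContent as the Arguments type are my guesses at the beta API, not verified.

```swift
import FoundationModels

// Sketch: a tool whose argument schema is decided at runtime.
// All schema-building calls below are unverified assumptions.
struct DynamicTool: Tool {
    let name = "dynamicTool"
    let description = "A tool whose parameters are defined at runtime."
    let parameters: GenerationSchema

    init(runtimeFields: [String]) throws {
        // Build a schema from field names known only at runtime
        // (all string-typed here, for simplicity).
        let properties = runtimeFields.map { field in
            DynamicGenerationSchema.Property(
                name: field,
                schema: DynamicGenerationSchema(type: String.self)
            )
        }
        let root = DynamicGenerationSchema(name: "Arguments", properties: properties)
        parameters = try GenerationSchema(root: root, dependencies: [])
    }

    func call(arguments: GeneratedContent) async throws -> ToolOutput {
        // Arguments arrive as dynamically typed GeneratedContent,
        // which can be inspected at runtime.
        ToolOutput(arguments)
    }
}
```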
1
0
476
Jul ’25
Unavailable error is wrong?
This is my code:

```swift
switch SystemLanguageModel.default.availability {
case .available:
    ContentView()
        .popover(isPresented: $showSettings) {
            SettingsView().presentationCompactAdaptation(.popover)
        }
case .unavailable(.modelNotReady):
    ContentUnavailableView("Apple Intelligence is unavailable",
        systemImage: "apple.intelligence.badge.xmark",
        description: Text("Please come back later."))
case .unavailable(.appleIntelligenceNotEnabled):
    ContentUnavailableView("Apple Intelligence is unavailable",
        systemImage: "apple.intelligence.badge.xmark",
        description: Text("Please turn on Apple Intelligence."))
case .unavailable(.deviceNotEligible):
    ContentUnavailableView("Apple Intelligence is unavailable",
        systemImage: "apple.intelligence.badge.xmark",
        description: Text("This device is not eligible for Apple Intelligence."))
case .unavailable:
    ContentUnavailableView("Apple Intelligence is unavailable",
        systemImage: "apple.intelligence.badge.xmark")
}
```

When I switch off Apple Intelligence, I expected "Please turn on Apple Intelligence.", but instead I get "Please come back later." This seems to be the wrong error?
1
0
289
Jul ’25
Can I give additional context to Foundation Models?
I'm interested in using Foundation Models to act as an AI support agent for our extensive in-app documentation. We have many pages of in-app documents, which the user can currently search, but it would be great to use Foundation Models to let the user get answers to arbitrary questions. Is this possible with the current version of Foundation Models? It seems like the way to add new context to the model is with the instructions parameter on LanguageModelSession. As I understand it, the combined instructions and prompt need to consume less than 4096 tokens. That definitely wouldn't be enough for the amount of documentation I want the agent to be able to refer to. Is there another way of doing this, maybe as a series of recursive queries? If there is a solution based on multiple queries, should I expect this to be fast enough for interactive use?
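For example, one retrieval-style approach I've been sketching: use the existing in-app search to pull only the few most relevant excerpts, and put just those in the prompt. Here searchDocs(query:limit:) is a hypothetical stand-in for our current search:

```swift
import FoundationModels

// Hypothetical helper standing in for the app's existing document search;
// returns the most relevant documentation excerpts for a query.
func searchDocs(query: String, limit: Int) -> [String] {
    // ... existing in-app search ...
    return []
}

func answer(question: String) async throws -> String {
    // Retrieve a handful of short excerpts so that instructions + prompt
    // stay well under the context limit.
    let excerpts = searchDocs(query: question, limit: 3)
        .joined(separator: "\n\n")

    let session = LanguageModelSession(instructions: """
        You are a support agent for this app. Answer only from the \
        documentation excerpts provided. If the answer is not in the \
        excerpts, say you don't know.
        """)

    let response = try await session.respond(to: """
        Documentation excerpts:
        \(excerpts)

        Question: \(question)
        """)
    return response.content
}
```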
4
0
364
Jul ’25
Foundation Models performance reality check - anyone else finding it slow?
Testing the Foundation Models framework with a health-focused recipe generation app. The on-device approach is appealing, but performance is rough: 20+ seconds just to get a recipe name and description. The same content from the Claude API: 4 seconds. I know it's beta and on-device has different tradeoffs, but this is approaching unusable territory for a real-time user experience. The streaming helps psychologically but doesn't mask the underlying latency. The privacy/cost benefits are compelling, but not if users abandon the feature before it completes. Anyone else seeing similar performance? Is this expected for the beta, or are there optimization techniques I'm missing?
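For reference, this is the streaming setup I'm using to at least surface partial results; it assumes the beta streamResponse(to:) API, where each element is a cumulative snapshot of the response so far:

```swift
import FoundationModels

// Stream partial output so the user sees progress immediately,
// even though total generation time is unchanged.
func streamRecipe(prompt: String) async throws {
    let session = LanguageModelSession()
    for try await partial in session.streamResponse(to: prompt) {
        // Each element is a cumulative snapshot of the text so far.
        print(partial)
    }
}
```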
5
0
298
Jul ’25
Error when opening mlpackage with Xcode
Hello, I'm trying to write a model with PyTorch and convert it to CoreML. I've written other models that work successfully, and even the problematic one converts, but I can't visualize it with Xcode to see where it runs. The error that appears is: "There was a problem decoding this Core ML document: validator error: unable to open file for read". Does anyone know why this is happening? Thanks a lot, Álvaro Corrochano
3
0
246
Apr ’25
Does ImageRequestHandler(data:) include depth data from AVCapturePhoto?
Hi all, I'm capturing a photo using AVCapturePhotoOutput, and I've set:

```swift
let photoSettings = AVCapturePhotoSettings()
photoSettings.isDepthDataDeliveryEnabled = true
```

Then I create the handler like this:

```swift
let data = photo.fileDataRepresentation()
let handler = try ImageRequestHandler(data: data, orientation: .right)
```

Now I'm wondering: if depth data delivery is enabled, is it actually included and used when I pass the Data to ImageRequestHandler? Or do I need to explicitly pass the depth data using the other initializer?

```swift
let handler = try ImageRequestHandler(
    cvPixelBuffer: photo.pixelBuffer!,
    depthData: photo.depthData,
    orientation: .right
)
```

In short: does ImageRequestHandler(data:) make use of embedded depth info from AVCapturePhoto.fileDataRepresentation(), or is the pixel buffer plus explicit depth data required? Thanks for any clarification!
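For now I'm leaning toward the explicit route, since it guarantees the request sees the depth data. A sketch of how I'd wire it up; the async perform(_:) call and the example request type are my assumptions about the new Vision API:

```swift
import Vision
import AVFoundation

// Sketch: pass depth explicitly so the request is guaranteed to see it.
// DetectFaceRectanglesRequest is only an example request type.
func detectFaces(in photo: AVCapturePhoto) async throws {
    guard let pixelBuffer = photo.pixelBuffer else { return }
    let handler = try ImageRequestHandler(
        cvPixelBuffer: pixelBuffer,
        depthData: photo.depthData,
        orientation: .right
    )
    let request = DetectFaceRectanglesRequest()
    let faces = try await handler.perform(request)
    print("Found \(faces.count) face(s)")
}
```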
1
0
278
Jul ’25
What is the Foundation Models support for basic math?
I am experimenting with Foundation Models in my time-tracking app to analyze users' tracked events, but I am finding that the model struggles with even basic computation of time, specifically converting from seconds to hours and minutes. To give just one example, when I prompt: "Convert 3672 seconds to hours, minutes, and seconds. Don't include the calculations in the resulting output", I get this: "3672 seconds is equal to 1 hour, 0 minutes, and 36 seconds". Which is clearly wrong: it should be 1 hour, 1 minute, and 12 seconds. Another issue I saw a lot is that seconds were treated as minutes, or the hours were just completely off. What can I do to make the support for math better? Or is that just something the model is not meant to be used for?
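One direction I've been considering is doing the arithmetic in code and letting the model call it as a tool. A rough sketch; the ToolOutput return type reflects the beta-era API and may have changed:

```swift
import FoundationModels

// A tool that does the arithmetic deterministically, so the model
// only has to decide to call it rather than compute the answer itself.
struct DurationConverter: Tool {
    let name = "convertDuration"
    let description = "Converts a duration in seconds to hours, minutes, and seconds."

    @Generable
    struct Arguments {
        @Guide(description: "The duration in seconds")
        let seconds: Int
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        let h = arguments.seconds / 3600
        let m = (arguments.seconds % 3600) / 60
        let s = arguments.seconds % 60
        return ToolOutput("\(h) hours, \(m) minutes, and \(s) seconds")
    }
}

// Usage: hand the tool to the session, then prompt as before.
let session = LanguageModelSession(tools: [DurationConverter()])
```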
1
0
217
Jun ’25
Compatibility issue of TensorFlow-metal with PyArrow
Overview: I'm experiencing a critical issue where TensorFlow-metal and PyArrow seem to be incompatible when installed together in the same environment. Whenever both packages are present, TensorFlow crashes and the kernel dies during execution.

Environment Details: macOS Version: 15.3.2; Mac Model: MacBook Pro M3 Max; Python Version: 3.11; TensorFlow Version: 2.19; PyArrow Version: 19.0.0

Issue Description: When both TensorFlow-metal and PyArrow are installed in the same Python environment, any attempt to use TensorFlow results in immediate kernel crashes. The issue appears to be a compatibility problem between these two packages rather than a problem with either package individually.

Steps to Reproduce:
1. Create a new Python environment: conda create -n tf-metal python=3.11
2. Install TensorFlow-metal: pip install tensorflow tensorflow-metal
3. Install PyArrow: pip install pyarrow
4. Run the following minimal example:

```python
import numpy as np
import tensorflow as tf

# Create a simple model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.summary()  # This works fine

# Generate some dummy data
X = np.random.random((100, 2))
y = np.random.random((100, 1))

# The crash happens exactly at this line
model.fit(X, y, epochs=5, batch_size=32)  # CRASH: kernel dies here
```

Result: the kernel crashes with no error message.

What I've Tried:
- Reinstalling both packages in different orders
- Using different versions of both packages
- Creating isolated environments
- Checking system logs for additional error information

The only workaround I've found is to use separate environments for each package, which isn't practical for my workflow, as I need both libraries for my data processing and machine learning pipeline.

Questions:
- Has anyone else encountered this specific compatibility issue?
- Are there known workarounds that allow both packages to coexist?
- Is this a known issue that's being addressed in upcoming releases?

Any insights, suggestions, or assistance would be greatly appreciated. I'm happy to provide any additional information that might help diagnose this problem. Thank you in advance for your help!
2
0
135
May ’25
Getting Foundation Models running in the Simulator
I have a Mac (M4 MacBook Pro) running the Tahoe 26.0 beta, and I am running the Xcode beta. I can run code that uses the LLM in a #Preview { }. But when I try to run the same code in the simulator, I get the 'device not ready' error, and I see the following in the Settings app. Is there anything I can do to get the simulator past this point and allow me to test Apple's LLM on it?
3
0
383
Jul ’25
Selecting GPU for TensorFlow-Metal on Mac Pro (2013) with v0.8.0
Hi everyone, I'm a Mac enthusiast experimenting with tensorflow-metal on my Mac Pro (2013). My question is about GPU selection in tensorflow-metal (v0.8.0), which still supports Intel-based Macs, including my machine. I've noticed that when running TensorFlow with Metal, it automatically selects a GPU, regardless of what I specify using device indices like "gpu:0", "gpu:1", or "gpu:2". I'm wondering if there's a way to manually specify which GPU should be used, via an environment variable or another method. For reference, I've tried the example from TensorFlow's guide on multi-GPU selection: https://www.tensorflow.org/guide/gpu#using_a_single_gpu_on_a_multi-gpu_system My goal is to explore performance optimizations by using MirroredStrategy in TensorFlow to leverage multiple GPUs: https://www.tensorflow.org/guide/distributed_training#mirroredstrategy Interestingly, I discovered that the metalcompute Python library (https://pypi.org/project/metalcompute/) lets me utilize manually selected GPUs on my system, allowing for proper multi-GPU computations. This makes me wonder:
- Is there a hidden environment variable or setting that allows manual GPU selection in tensorflow-metal?
- Has anyone successfully used MirroredStrategy on multiple GPUs with tensorflow-metal?
- Would a bridge between metalcompute and tensorflow-metal be necessary for this use case, or is there a more direct approach?
I'd love to hear if anyone else has experimented with this or has insights on getting finer control over GPU selection. Any thoughts or suggestions would be greatly appreciated. Thanks!
3
0
302
Mar ’25
Request for Agentic AI Mode (MCP Protocol) Support in Future Versions of iOS or Xcode
Hello Apple Team, Thank you for the recent Group Lab and for your continued work on advancing Xcode and developer tools. I’d like to submit a feature request: Are there any plans to introduce support for Agentic AI Mode (MCP protocol) in future versions of iOS or Xcode? As developer tools evolve toward more intelligent and context-aware environments, the integration of agentic AI capabilities could significantly enhance productivity and unlock new creative workflows. Looking forward to your consideration, and thank you again for the excellent session. Best regards
3
0
230
Jun ’25
Using Past Versions of Foundation Models As They Progress
Has Apple made any commitment to versioning the Foundation Models on device? What if you build a feature that works great on 26.0, but they change the model or guardrails in 26.1 and it breaks your feature? Is your only recourse filing Feedback or pulling the feature from the app? Will there be a way to specify a model version, as in all of the server-based LLM provider APIs? If not, this sounds risky to build on.
7
0
402
Jul ’25