I am writing a custom package wrapping Foundation Models that provides chain-of-thought with intermittent self-evaluation, among other things. At first I designed the package with the command line in mind, but after seeing how well it augments the models and makes them more intelligent, I wanted to try building a SwiftUI wrapper around it.
When I started I was using synchronous generation rather than streaming, but to give the best user experience (as I've seen in the WWDC sessions) it is necessary to provide constant feedback to the user that something is happening.
I have created a super simplified example of my setup so it's easier to understand.
First, there is the reasoning conversation item, which can be converted to an XML representation that is then fed back into the model (I've found XML works best for structured input):
public typealias ConversationContext = XMLDocument

extension ConversationContext {
    public func toPlainText() -> String {
        return xmlString(options: [.nodePrettyPrint])
    }
}
/// Represents a reasoning item in a conversation, which includes a title and reasoning content.
/// Reasoning items are used to provide detailed explanations or justifications for certain decisions or responses within a conversation.
@Generable(description: "A reasoning item in a conversation, containing content and a title.")
struct ConversationReasoningItem: ConversationItem {
    @Guide(description: "The content of the reasoning item, which is your thinking process or explanation")
    public var reasoningContent: String

    @Guide(description: "A short summary of the reasoning content, digestible in an interface.")
    public var title: String

    @Guide(description: "Indicates whether reasoning is complete")
    public var done: Bool
}
extension ConversationReasoningItem: ConversationContextProvider {
    public func toContext() -> ConversationContext {
        // <ReasoningItem title="${title}">
        //     ${reasoningContent}
        // </ReasoningItem>
        let root = XMLElement(name: "ReasoningItem")
        root.addAttribute(XMLNode.attribute(withName: "title", stringValue: title) as! XMLNode)
        root.stringValue = reasoningContent
        return ConversationContext(rootElement: root)
    }
}
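As a quick sanity check, the XML round-trip can be exercised directly. The values below are made up for illustration, and this assumes the memberwise initializer is still available on the @Generable struct:

```swift
// Illustrative values only; assumes the synthesized memberwise initializer.
let item = ConversationReasoningItem(
    reasoningContent: "The user asked about streaming, so first establish what the UI needs.",
    title: "Clarify UI requirements",
    done: false
)
print(item.toContext().toPlainText())
// Expected shape (pretty-printing may vary):
// <ReasoningItem title="Clarify UI requirements">The user asked about streaming, ...</ReasoningItem>
```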
Then there is the generator, which creates a reasoning item from a user query and previously generated items:
struct ReasoningItemGenerator {
    var instructions: String {
        """
        <omitted for brevity>
        """
    }

    func generate(from input: (String, [ConversationReasoningItem])) async throws -> sending LanguageModelSession.ResponseStream<ConversationReasoningItem> {
        let session = LanguageModelSession(instructions: instructions)
        // Build the context for the reasoning item out of the user's query and the previous reasoning items.
        let userQuery = "User's query: \(input.0)"
        let reasoningItemsText = input.1.map { $0.toContext().toPlainText() }.joined(separator: "\n")
        let context = userQuery + "\n" + reasoningItemsText
        let reasoningItemResponse = try await session.streamResponse(
            to: context, generating: ConversationReasoningItem.self)
        return reasoningItemResponse
    }
}
I'm not sure whether returning LanguageModelSession.ResponseStream<ConversationReasoningItem> is the right move; I am just imitating what session.streamResponse returns.
Then there is the orchestrator, which I can't figure out. It receives the streamed ConversationReasoningItems from the generator, is responsible for streaming those to SwiftUI later, and also evaluates each reasoning item once it is complete to see whether it needs to be regenerated (to keep the model on track). I want consumers of the orchestrator to receive partially generated reasoning items while the generator is producing them. When an item finishes, if the evaluation passes, the item is kept; if it fails, the reasoning item should be removed from the stream before a new one is generated. So in-flight reasoning items should be emitted aggressively.
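One shape that matches "emit in-flight snapshots aggressively, then confirm or retract" is a single async stream of events. The sketch below is only one possibility under assumptions: ReasoningEvent and evaluate are names invented here, and the exact snapshot/collect surface of ResponseStream has shifted between betas, so treat those calls as approximate:

```swift
import FoundationModels

// A sketch, not a definitive design. `ReasoningEvent` and `evaluate` are invented names.
enum ReasoningEvent {
    case partial(ConversationReasoningItem.PartiallyGenerated) // in-flight snapshot
    case accepted(ConversationReasoningItem)                   // passed evaluation
    case discarded                                             // failed: drop the in-flight item
}

struct ReasoningOrchestrator {
    let generator: ReasoningItemGenerator
    let evaluate: (ConversationReasoningItem) async -> Bool    // your self-evaluation step

    func run(query: String) -> AsyncThrowingStream<ReasoningEvent, Error> {
        AsyncThrowingStream { continuation in
            let task = Task {
                do {
                    var accepted: [ConversationReasoningItem] = []
                    var finished = false
                    while !finished {
                        let stream = try await generator.generate(from: (query, accepted))
                        // Forward every snapshot immediately so the UI can render in-flight items.
                        for try await snapshot in stream {
                            continuation.yield(.partial(snapshot))
                        }
                        // collect() waits for completion and returns the full item
                        // (or keep the last snapshot and convert it yourself if collect() re-consumes).
                        let item = try await stream.collect().content
                        if await evaluate(item) {
                            accepted.append(item)
                            continuation.yield(.accepted(item))
                            finished = item.done
                        } else {
                            // Tell consumers to remove the in-flight item; the loop regenerates it.
                            continuation.yield(.discarded)
                        }
                    }
                    continuation.finish()
                } catch {
                    continuation.finish(throwing: error)
                }
            }
            continuation.onTermination = { _ in task.cancel() }
        }
    }
}
```

A SwiftUI view model can then consume the stream, appending on .partial/.accepted and removing the last in-flight item on .discarded.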
I really am having trouble figuring this out, so if someone with more knowledge about asynchronous Swift, or, even better, someone who has worked on the Foundation Models framework, could point me in the right direction, that would be awesome!
Foundation Models
Discuss the Foundation Models framework, which provides access to Apple’s on-device large language model that powers Apple Intelligence to help you perform intelligent tasks specific to your app.
Hi everyone,
I’m currently exploring the use of Foundation Models on Apple platforms to build a chatbot-style assistant within an app. While the integration part is straightforward using the new FoundationModels APIs, I’m trying to figure out how to control the assistant’s responses more tightly, particularly:
Ensuring the assistant adheres to a specific tone, context, or domain (e.g. hospitality, healthcare, etc.)
Preventing hallucinations or unrelated outputs
Constraining responses based on app-specific rules, structured data, or recent interactions
I’ve experimented with prompt, systemMessage, and few-shot examples to steer outputs, but even with carefully generated prompts, the model occasionally produces incorrect or out-of-scope responses.
Additionally, when using multiple tools, I'm unsure how best to structure the setup so the model can select the correct pathway/tool and respond appropriately. Is there a recommended approach to guiding the model's decision-making when several tools or structured contexts are involved?
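For context on the tool-selection question: the model picks among tools based on their names and descriptions, so keeping those specific and non-overlapping is one practical lever. A minimal sketch with invented names and data (the exact Tool/ToolOutput surface has changed between betas, so verify against the current SDK):

```swift
import FoundationModels

// Hedged sketch: tool name, description, and data are invented for illustration.
struct FindRoomsTool: Tool {
    let name = "findAvailableRooms"
    let description = "Finds available hotel rooms for a given date using the app's own booking data."

    @Generable
    struct Arguments {
        @Guide(description: "The check-in date, in ISO 8601 format")
        var date: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // Query your structured, app-specific data here instead of letting the model guess.
        ToolOutput("Rooms available on \(arguments.date): 12")
    }
}

// The session selects tools from their descriptions, so keep them specific and disjoint.
let session = LanguageModelSession(
    tools: [FindRoomsTool()],
    instructions: "You are a hotel concierge. Only answer questions about this hotel."
)
```

Grounding tool output in your own structured data also helps with the hallucination concern, since the model is constrained to what the tool returns.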
Looking forward to hearing your thoughts or being pointed toward related WWDC sessions, Apple docs, or sample projects.
I'm running macOS 26 Beta 5. I noticed that I can no longer achieve 100% usage on the ANE as I could before with the Apple Foundation Models on-device model. Has Apple activated some kind of throttling or power limiting of the ANE? I cannot get above 3 W or 40% usage since upgrading, and I'm on the high-power energy mode. Is there an API rate limit being applied?
I have an M4 Pro Mac mini with 64 GB of memory.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
We are really excited to have introduced the Foundation Models framework in WWDC25. When using the framework, you might have feedback about how it can better fit your use cases.
Starting in macOS/iOS 26 Beta 4, the best way to provide feedback is to use #Playground in Xcode. To do so:
In Xcode, create a playground using #Playground. For more information, see Running code snippets using the playground macro.
Reproduce the issue by setting up a session and generating a response with your prompt.
In the canvas on the right, click the thumbs-up icon to the right of the response.
Follow the instructions on the pop-up window and submit your feedback by clicking Share with Apple.
Another way to provide your feedback is to file a feedback report with relevant details. Specific to the Foundation Models framework, it’s super important to add the following information in your report:
Language model feedback
This feedback contains the session transcript, including the instructions, the prompts, the responses, and so on. Without it, we can't reason about the model's behavior, and hence can hardly take any action.
Use logFeedbackAttachment(sentiment:issues:desiredOutput:) to retrieve the feedback data of your current model session, as shown in the usage example, write the data to a file, and then attach the file to your feedback report.
If you believe what you’d report is related to the system configuration, please capture a sysdiagnose and attach it to your feedback report as well.
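Putting those steps together, writing the attachment to a file might look like this sketch. The sentiment, issues, and file path are placeholder values; check the exact parameter types against the current SDK documentation:

```swift
// Sketch only: placeholder values; verify parameter types against the current SDK.
let data = session.logFeedbackAttachment(
    sentiment: .negative,   // e.g. the response was worse than expected
    issues: [],             // optionally describe specific issues
    desiredOutput: nil      // optionally provide the output you wanted
)
let url = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("feedback.json")
try data.write(to: url)
// Attach the written file to your Feedback Assistant report.
```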
The framework is still new. Your actionable feedback helps us evolve the framework quickly, and we appreciate that.
Thanks,
The Foundation Models framework team
In this online session, you can code along with us as we build generative AI features into a sample app live in Xcode. We'll guide you through implementing core features like basic text generation, as well as advanced topics like guided generation for structured data output, streaming responses for dynamic UI updates, and tool calling to retrieve data or take an action.
Check out these resources to get started:
Download the project files: https://developer.apple.com/events/re...
Explore the code along guide: https://developer.apple.com/events/re...
Join the live Q&A: https://developer.apple.com/videos/pl...
Agenda – All times PDT
10 a.m.: Welcome and Xcode setup
10:15 a.m.: Framework basics, guided generation, and building prompts
11 a.m.: Break
11:10 a.m.: UI streaming, tool calling, and performance optimization
11:50 a.m.: Wrap up
All are welcome to attend the session. To actively code along, you'll need a Mac with Apple silicon that supports Apple Intelligence running the latest release of macOS Tahoe 26 and Xcode 26.
If you have questions after the code along concludes, please share a post here in the forums and engage with the community.
Recursive and Self-Referential Data Structures
Combining recursive and self-referential data structures with frameworks like Accelerate, Swift macros, and SwiftUI can offer significant benefits in terms of performance, maintainability, and expressiveness. Here is how Apple Intelligence breaks it down.
Benefits:
Natural Representation of Complex Data:
Recursive structures, such as trees and graphs, are ideal for representing hierarchical or interconnected data, like file systems, social networks, and DOM trees.
Simplified Algorithms:
Many algorithms, such as traversals, sorting, and searching, are more straightforward and elegant when implemented using recursion.
Dynamic Memory Management:
Self-referential structures can dynamically grow and shrink, making them suitable for applications with unpredictable data sizes.
Challenges:
Performance Overhead:
Recursive algorithms can lead to stack overflow if not properly optimized (e.g., using tail recursion).
Self-referential structures can introduce memory management challenges, such as retain cycles.
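For instance, in Swift the classic cycle is a parent and a child that strongly retain each other; marking the back-reference weak breaks it:

```swift
// Parent strongly owns children; the back-pointer is weak, so no retain cycle forms.
final class Node {
    var value: String
    var children: [Node] = []
    weak var parent: Node?
    init(value: String) { self.value = value }
}

let root = Node(value: "root")
let child = Node(value: "child")
child.parent = root
root.children.append(child)
```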
Accelerate Framework
Benefits:
High-Performance Computation:
Accelerate provides optimized libraries for numerical and scientific computing, including linear algebra, FFT, and image processing.
It can significantly speed up computations, especially for large datasets, by leveraging multi-core processors and GPU acceleration.
Parallel Processing:
Accelerate automatically parallelizes operations, making it easier to take advantage of modern hardware capabilities.
Integration with Recursive Data:
Matrix and Vector Operations:
Use Accelerate for operations on matrices and vectors, which are common in recursive algorithms like those used in machine learning and physics simulations.
FFT and Convolutions:
Accelerate's FFT functions can be used in recursive algorithms for signal processing and image analysis.
SwiftMacros
Benefits:
Code Generation and Transformation:
SwiftMacros allow you to generate and transform code at compile time, enabling the creation of DSLs, boilerplate reduction, and optimization.
Improved Compile-Time Checks:
Macros can perform complex compile-time checks, ensuring code correctness and reducing runtime errors.
Integration with Recursive Data:
DSL for Data Structures:
Create a DSL using SwiftMacros to define recursive data structures concisely and safely.
Optimization:
Use macros to generate optimized code for recursive algorithms, such as memoization or iterative transformations.
SwiftUI Hooks
Benefits:
State Management:
Property wrappers like @State, @Binding, and @StateObject simplify state management in SwiftUI, making it easier to handle dynamic data.
Side Effects:
The .task and .onAppear modifiers let you perform side effects in a declarative manner, integrating seamlessly with asynchronous operations.
Reusable Logic:
Custom hooks enable the reuse of stateful logic across multiple views, promoting code maintainability.
Integration with Recursive Data:
Dynamic Data Binding:
Use SwiftUI's data binding to manage the state of recursive data structures, ensuring that UI updates reflect changes in the underlying data.
Efficient Rendering:
SwiftUI's diffing algorithm efficiently updates the UI only for the parts of the recursive structure that have changed, improving performance.
Asynchronous Data Loading:
Combine the .task modifier with recursive data structures to fetch and process data asynchronously, such as loading a tree structure from a remote server.
Example: Combining All Components
Imagine you're building an app that visualizes a hierarchical file system using a recursive tree structure. Here's how you might combine these components:
Define the Recursive Data Structure:
Define a generic tree node; the array provides the indirection a value type needs to contain itself (a Swift macro could generate conformances or other boilerplate around it).
struct TreeNode<T> {
    var value: T
    var children: [TreeNode<T>] = []
}
Optimize with Accelerate:
Plain recursion handles structural operations like computing the size of the tree; reach for Accelerate when the nodes carry large numeric payloads to transform.
func computeTreeSize<T>(_ node: TreeNode<T>) -> Int {
    return node.children.reduce(1) { $0 + computeTreeSize($1) }
}
Manage State with SwiftUI Hooks:
Use SwiftUI hooks to load and display the tree structure dynamically.
struct FileSystemView: View {
    @State private var rootNode = Self.loadTree()

    var body: some View {
        List {
            TreeView(node: rootNode)
        }
    }

    private static func loadTree() -> TreeNode<String> {
        // Load or generate the tree structure
        TreeNode(value: "/")
    }
}

struct TreeView: View {
    let node: TreeNode<String>

    var body: some View {
        Text(node.value)
        ForEach(node.children, id: \.value) { child in
            TreeView(node: child)
                .padding(.leading)
        }
    }
}
Perform Side Effects with .task:
Use the .task modifier to fetch data asynchronously and update the tree structure when the view appears.
struct FileSystemView: View {
    @State private var rootNode = TreeNode(value: "/")

    var body: some View {
        TreeView(node: rootNode)
            .task {
                // Fetch data from a server or database, then assign it to rootNode
            }
    }
}
By combining recursive data structures with Accelerate, SwiftMacros, and SwiftUI hooks, you can create powerful, efficient, and maintainable applications that handle complex data with ease.