Where and how does an NSRulerView get its magnification from? I am not using NSScrollView's automatic magnification but my own mechanism. How do I relay the zoom factor to NSRulerView?
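There is no zoom property on NSRulerView itself; the ruler derives its marks from the coordinate conversion between the ruler and its client (document) view. So one possible way to relay a custom zoom factor, sketched under that assumption, is to apply the zoom by scaling the document view's bounds relative to its frame and then letting the rulers redraw. The names documentView and zoomFactor below stand in for your own zoom mechanism.

import AppKit

// Hedged sketch: NSRulerView measures in the client view's coordinate space,
// so shrinking the document view's bounds relative to its frame makes the
// content (and therefore the ruler marks) appear magnified.
func applyZoom(_ zoomFactor: CGFloat, to documentView: NSView, in scrollView: NSScrollView) {
    let frameSize = documentView.frame.size
    documentView.setBoundsSize(NSSize(width: frameSize.width / zoomFactor,
                                      height: frameSize.height / zoomFactor))
    documentView.needsDisplay = true

    // Ask the rulers to redraw with the new scale.
    scrollView.horizontalRulerView?.needsDisplay = true
    scrollView.verticalRulerView?.needsDisplay = true
}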
Explore the various UI frameworks available for building app interfaces. Discuss the use cases for different frameworks, share best practices, and get help with specific framework-related questions.
The following watchOS app example is very short, but it already doesn't behave as expected when using the Digital Crown (full code):
import SwiftUI

struct ContentView: View {
    let array = ["One", "Two", "Three", "Four"]
    @State var selection = "One"

    var body: some View {
        Picker("Array", selection: $selection) {
            ForEach(array, id: \.self) {
                Text($0)
            }
        }
    }
}
The following two errors are thrown when using the Digital Crown for scrolling:
ScrollView contentOffset binding has been read; this will cause grossly inefficient view performance as the ScrollView's content will be updated whenever its contentOffset changes. Read the contentOffset binding in a view that is not parented between the creator of the binding and the ScrollView to avoid this.
Error: Error Domain=NSOSStatusErrorDomain Code=-536870187 "(null)"
Any help appreciated. Thanks a lot.
Hi, I'm experiencing the behaviour outlined below. When I navigate programmatically on iPadOS or macOS from a tab that hides the tab bar to another tab, the tab bar remains hidden. The real app has its entry point in UIKit (i.e. it uses a UITabBarController instead of a SwiftUI TabView), but since the problem is reproducible with a SwiftUI-only app, I used one for the sake of simplicity.
import SwiftUI

@main
struct HiddenTabBarTestApp: App {
    @State private var selectedIndex = 0

    var body: some Scene {
        WindowGroup {
            TabView(selection: $selectedIndex) {
                Text("First Tab")
                    .tabItem {
                        Label("1", systemImage: "1.circle")
                    }
                    .tag(0)
                NavigationStack {
                    Button("Go to first tab") {
                        selectedIndex = 0
                    }
                    .searchable(text: .constant(""))
                }
                .tabItem {
                    Label("2", systemImage: "2.circle")
                }
                .tag(1)
            }
        }
    }
}
Reproduction:
Create a new SwiftUI App with the iOS App template and use the code from above
Run the app on iPadOS or macOS
Navigate to the second tab
Click into the search bar
Click the "Go to first tab" button
The tab bar is no longer visible
Is this a bug in the framework or is it the expected behaviour? If it's the expected behaviour, do you have a good solution/workaround that doesn't require me to end the search programmatically (e.g. by using @Environment(\.dismissSearch)) before navigating to another tab? The goal would be to show the tab bar in the first tab while keeping the search open in the second tab.
The documentation here states that the saveItem command group placement includes Save As as a default command, but it doesn't appear.
My document type specifies multiple writableContentTypes; I expected this would enable Save As.
How do I do this?
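In the meantime, one possible workaround (a sketch, not the documented default behaviour) is to add a Save As entry yourself at the saveItem placement via CommandGroup. The MyDocument type and the button's empty action below are hypothetical placeholders for your own document model and export flow.

import SwiftUI
import UniformTypeIdentifiers

// Minimal hypothetical document so the sketch is self-contained.
struct MyDocument: FileDocument {
    static var readableContentTypes: [UTType] { [.plainText] }
    var text = ""

    init() {}
    init(configuration: ReadConfiguration) throws {
        text = String(decoding: configuration.file.regularFileContents ?? Data(), as: UTF8.self)
    }
    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        FileWrapper(regularFileWithContents: Data(text.utf8))
    }
}

@main
struct MyDocApp: App {
    var body: some Scene {
        DocumentGroup(newDocument: MyDocument()) { file in
            TextEditor(text: file.$document.text)
        }
        .commands {
            // Append a manual "Save As…" next to the standard save commands,
            // since the default one isn't showing up.
            CommandGroup(after: .saveItem) {
                Button("Save As…") {
                    // Placeholder: trigger your own save panel / fileExporter
                    // for one of the additional writableContentTypes.
                }
                .keyboardShortcut("s", modifiers: [.command, .shift])
            }
        }
    }
}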
Hey Everyone,
I can't seem to import ActiveLabel; it says there is no such module. Please help me.
Thanks,
Ben
import UIKit
import ActiveLabel

protocol TweetCellDelegate: AnyObject {
    func handleProfileImageTapped(_ cell: TweetCell)
    func handleReplyTapped(_ cell: TweetCell)
    func handleLikeTapped(_ cell: TweetCell)
}

class TweetCell: UICollectionViewCell {
Hello.
I am currently building an app using ARKit.
As for the UI, I am using SwiftUI and NavigationStack + NavigationLink for navigation and screen transitions!
Here I need to go back and forth between the AR screen and other screens.
If the number of screen transitions is small, this is not a problem.
However, if the number of screen transitions increases to 10 or 20, it crashes somewhere.
We are struggling with this problem. (The nature of the application requires multiple screen transitions.)
The crash log showed the following.
error: read memory from 0x1e387f2d4 failed
AR_Crash_Sample-2025-03-07-115914.txt
Incident Identifier: B23D806E-D578-4A95-8828-2A1E8D6BB7F8
Beta Identifier: 924A85AB-441C-41A7-9BC2-063940BDAF32
Hardware Model: iPhone16,1
Process: AR_Crash_Sample [2375]
Path: /private/var/containers/Bundle/Application/FAC3D662-DB10-434E-A006-79B9515D8B7A/AR_Crash_Sample.app/AR_Crash_Sample
Identifier: ar.crash.sample.AR.Crash.Sample
Version: 1.0 (1)
AppStoreTools: 16C7015
AppVariant: 1:iPhone16,1:18
Beta: YES
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd [1]
Coalition: ar.crash.sample.AR.Crash.Sample [1464]
Date/Time: 2025-03-07 11:59:14.3691 +0900
Launch Time: 2025-03-07 11:57:47.3955 +0900
OS Version: iPhone OS 18.3.1 (22D72)
Release Type: User
Baseband Version: 2.40.05
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: SIGNAL 6 Abort trap: 6
Terminating Process: AR_Crash_Sample [2375]
Triggered by Thread: 7
Application Specific Information:
abort() called
Thread 7 name: Dispatch queue: com.apple.arkit.depthtechnique
Thread 7 Crashed:
0 libsystem_kernel.dylib 0x1e387f2d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x21cedd59c pthread_kill + 268
2 libsystem_c.dylib 0x199f98b08 abort + 128
3 libc++abi.dylib 0x21ce035b8 abort_message + 132
4 libc++abi.dylib 0x21cdf1b90 demangling_terminate_handler() + 320
5 libobjc.A.dylib 0x18f6c72d4 _objc_terminate() + 172
6 libc++abi.dylib 0x21ce0287c std::__terminate(void (*)()) + 16
7 libc++abi.dylib 0x21ce02820 std::terminate() + 108
8 libdispatch.dylib 0x199edefbc _dispatch_client_callout + 40
9 libdispatch.dylib 0x199ee65cc _dispatch_lane_serial_drain + 768
10 libdispatch.dylib 0x199ee7158 _dispatch_lane_invoke + 432
11 libdispatch.dylib 0x199ee85c0 _dispatch_workloop_invoke + 1744
12 libdispatch.dylib 0x199ef238c _dispatch_root_queue_drain_deferred_wlh + 288
13 libdispatch.dylib 0x199ef1bd8 _dispatch_workloop_worker_thread + 540
14 libsystem_pthread.dylib 0x21ced8680 _pthread_wqthread + 288
15 libsystem_pthread.dylib 0x21ced6474 start_wqthread + 8
Perhaps I am using too much memory!
How can I address this phenomenon?
For the AR functionality, we are using a UIViewRepresentable, which wraps the UIKit code so it can be called from SwiftUI:
import ARKit
import AsyncAlgorithms
import AVFoundation
import SCNLine
import SwiftUI
internal struct MeasureARViewContainer: UIViewRepresentable {
@Binding var tapCount: Int
@Binding var distance: Double?
@Binding var currentIndex: Int
var focusSquare: FocusSquare = FocusSquare()
let coachingOverlay: ARCoachingOverlayView = ARCoachingOverlayView()
func makeUIView(context: Context) -> ARSCNView {
let arView: ARSCNView = ARSCNView()
arView.delegate = context.coordinator
let configuration: ARWorldTrackingConfiguration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
configuration.frameSemantics = [.sceneDepth, .smoothedSceneDepth]
}
arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
context.coordinator.sceneView = arView
context.coordinator.scanTarget()
coachingOverlay.session = arView.session
coachingOverlay.delegate = context.coordinator
coachingOverlay.goal = .horizontalPlane
coachingOverlay.activatesAutomatically = true
coachingOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
coachingOverlay.translatesAutoresizingMaskIntoConstraints = false
arView.addSubview(coachingOverlay)
return arView
}
func updateUIView(_ _: ARSCNView, context: Context) {
context.coordinator.mode = MeasurementMode(rawValue: currentIndex) ?? .width
if tapCount == 0 {
context.coordinator.resetMeasurement()
return
}
if distance != nil {
return
}
DispatchQueue.main.async {
if context.coordinator.distance == nil {
context.coordinator.handleTap()
}
}
}
static func dismantleUIView(_ uiView: ARSCNView, coordinator: Coordinator) {
uiView.session.pause()
coordinator.stopScanTarget()
coordinator.stopSpeech()
DispatchQueue.main.async {
uiView.removeFromSuperview()
}
}
func makeCoordinator() -> Coordinator {
Coordinator(self)
}
class Coordinator: NSObject, ARSCNViewDelegate, ARSessionDelegate, ARCoachingOverlayViewDelegate {
var parent: MeasureARViewContainer
var sceneView: ARSCNView?
var startPosition: SCNVector3?
var pointedCount: Int = 0
var distance: Float?
var mode: MeasurementMode = .width
let synthesizer: AVSpeechSynthesizer = AVSpeechSynthesizer()
var scanTargetTask: Task<Void, Never>?
var currentResult: ARRaycastResult?
init(_ parent: MeasureARViewContainer) {
self.parent = parent
}
// ... etc
}
}
We rotate from portrait to landscape, then force-rotate back to portrait. The app rotates, but the screen seems to be divided into two parts.
My sample code:
class OrientationHelper {
    static func lockOrientation(_ orientation: UIInterfaceOrientationMask) {
        if let delegate = UIApplication.shared.delegate as? AppDelegate {
            delegate.orientationLock = orientation
        }
    }

    static func lockOrientation(_ orientation: UIInterfaceOrientationMask, andRotateTo rotateOrientation: UIInterfaceOrientation) {
        self.lockOrientation(orientation)
        DispatchQueue.main.async {
            UIDevice.current.setValue(rotateOrientation.rawValue, forKey: "orientation")
            UIViewController.attemptRotationToDeviceOrientation()
        }
    }
}

// AppDelegate
var orientationLock = UIInterfaceOrientationMask.portrait

func application(_ application: UIApplication, supportedInterfaceOrientationsFor window: UIWindow?) -> UIInterfaceOrientationMask {
    return orientationLock
}
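Separately from the split-screen symptom: setValue(forKey: "orientation") and attemptRotationToDeviceOrientation() are deprecated as of iOS 16, and the KVC path is sometimes blamed for layout glitches after forced rotations. A hedged sketch of the scene-based alternative, assuming the same OrientationHelper/AppDelegate setup shown above:

import UIKit

extension OrientationHelper {
    // Sketch of the iOS 16+ replacement for the KVC "orientation" trick.
    @available(iOS 16.0, *)
    static func lockOrientationModern(_ orientation: UIInterfaceOrientationMask) {
        lockOrientation(orientation)
        DispatchQueue.main.async {
            guard let scene = UIApplication.shared.connectedScenes
                .compactMap({ $0 as? UIWindowScene })
                .first else { return }

            // Ask the scene to rotate to the allowed orientations and have the
            // view controllers re-evaluate supportedInterfaceOrientations.
            scene.requestGeometryUpdate(.iOS(interfaceOrientations: orientation)) { error in
                print("Rotation request failed: \(error)")
            }
            scene.keyWindow?.rootViewController?
                .setNeedsUpdateOfSupportedInterfaceOrientations()
        }
    }
}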
Hello!
We are writing because we recently started receiving complaints that our App Clip QR codes aren't working well. Users scanning the QR codes are taken to the website, where they don't even get the App Clip banner.
It seems to be a recent change, as we only recently started getting complaints about it. I have also noticed this on iOS 17 and iOS 18 devices.
Our application with this App Clip has been live for over a year and we haven't had issues before.
Example 1: The user scans an App Clip QR code but only sees the URL, and opening it just loads the website. It doesn't even show the App Clip banner.
After going back to the Camera app and rescanning the QR code, it shows the App Clip correctly. Going back to the website now also shows the banner popup.
Example 2: I scan an App Clip QR code. It shows just the website URL. I scan another App Clip QR code and it loads that one correctly. Now when I scan the first App Clip QR code again, it also shows correctly.
Example 3: I scan an App Clip QR code. It shows the website URL instead of the App Clip. When I open iOS Control Center and close it, the Camera app refreshes and shows the App Clip button instead of the web link.
I have videos of all of these cases that I could upload somewhere if needed, as it seems some links are not allowed here.
Let me know if you need any more input.
Hello everyone!
I'm developing an app with this amazing RoomPlan API. So far everything works great; however, I want to render a customized viewer and I don't want to draw the default 3D geometry. The line shapes are very helpful, so I want to keep them. Is there any way to disable the 3D geometry shown in the viewer? I could just render another viewer on top of the room capture viewer, but I want to save as many resources as possible.
Hello,
I'm struggling to do UI testing with accessibility identifiers when I wrap a SwiftUI view in a UIHostingController (our app is mainly written in UIKit).
Considering this SwiftUI view (simplified for this post):
HStack {
Text(self.title.uppercased())
.albusTheme(.header)
.lineLimit(self.isMultiline ? nil : 1)
.multilineTextAlignment(.leading)
.accessibilityAddTraits(.isStaticText)
.accessibilityIdentifier("section_title")
}
This view and its controller are embedded as a UITableViewHeaderFooterView in a UITableView.
This is an extract of recursiveDescription output:
| | | | | | <_UITableViewHeaderFooterContentView: 0x1076ad720; frame = (0 0; 393 40); layer = <CALayer: 0x6000006b1720>>
| | | | | | | <_TtGC13ListComponent19SwiftUIFieldContentV20ListComponentLibrary17FormSectionHeader_: 0x1076ab980; baseClass = UIControl; frame = (0 0; 393 40); layer = <CALayer: 0x6000006b1da0>>
| | | | | | | | <_TtGC7SwiftUI14_UIHostingViewV20ListComponentLibrary17FormSectionHeader_: 0x1078f9600; frame = (0 0; 393 40); gestureRecognizers = <NSArray: 0x600000e25d70>; backgroundColor = UIExtendedSRGBColorSpace 0.0666667 0.133333 0.227451 1; layer = <SwiftUI.UIHostingViewDebugLayer: 0x6000006b19a0>>
| | | | | | | | | <_TtCOCV7SwiftUI11DisplayList11ViewUpdater8Platform13CGDrawingView: 0x106985550; frame = (16 12.6667; 147.667 14.6667); anchorPoint = (0, 0); opaque = NO; autoresizesSubviews = NO; layer = <_TtCOCV7SwiftUI11DisplayList11ViewUpdater8PlatformP33_65A81BD07F0108B0485D2E15DE104A7514CGDrawingLayer: 0x6000026b8240>>
CGDrawingView seems to hide the underlying view hierarchy. Is there a way to access accessibility settings using the integration of SwiftUI in UIKit?
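One thing worth checking (a sketch, assuming a standard XCUITest target): even when CGDrawingView hides the text from recursiveDescription, SwiftUI usually still exposes it to the accessibility tree, so querying by the identifier rather than walking the UIKit view hierarchy may work. "section_title" is the identifier set in the SwiftUI view above.

import XCTest

final class SectionHeaderUITests: XCTestCase {
    func testSectionTitleIsExposed() {
        let app = XCUIApplication()
        app.launch()

        // SwiftUI elements often surface as staticTexts or as generic elements,
        // so query both instead of relying on the UIKit view hierarchy.
        let byStaticText = app.staticTexts["section_title"]
        let byAnyElement = app.descendants(matching: .any)["section_title"]

        XCTAssertTrue(byStaticText.waitForExistence(timeout: 5)
                      || byAnyElement.waitForExistence(timeout: 5))
    }
}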
I have followed this video on implementing a custom view for the watchOS 11 Smart Stack Live Activities. However, the UI of my iOS app keeps showing up on watchOS.
struct widgetLiveActivity: Widget {
    @Environment(\.activityFamily) var activityFamily

    var body: some WidgetConfiguration {
        ActivityConfiguration(for: widgetAttributes.self) { context in
            switch activityFamily {
            case .small, _:
                Text("WatchOS UI")
            case .medium:
                Text("iOS UI")
                    .activitySystemActionForegroundColor(Color.black)
            }
        } dynamicIsland: { context in ... }
        .supplementalActivityFamilies([.small, .medium])
    }
}
Hi all, I am looking for a future-proof way of getting the screen resolution of my display device using SwiftUI on macOS. I understand that it can't really be done to the fullest extent: the closest API we have is GeometryProxy, and that only yields the size of the parent view, which on macOS does not give us the display's screen resolution. The only viable option I am left with is NSScreen.frame.
However, my issue is that Apple seems to be moving towards SwiftUI aggressively, and in order to future-proof my application I want to avoid relying on AppKit methods as much as possible. Hence my question: is there a way to get the screen resolution of a display using SwiftUI that Apple itself recommends? If not, can I safely rely on NSScreen's frame API?
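As far as I know there is no SwiftUI-native API for display resolution on macOS today, so NSScreen remains the practical route. A minimal sketch bridging it into a SwiftUI view, assuming you just need the frame and backing scale of the screen containing the key window:

import SwiftUI
import AppKit

struct ScreenInfoView: View {
    var body: some View {
        // NSScreen.main is the screen with the key window; frame is in points.
        // Multiply by backingScaleFactor to get pixels.
        let screen = NSScreen.main
        let frame = screen?.frame ?? .zero
        let scale = screen?.backingScaleFactor ?? 1

        Text("Screen: \(Int(frame.width)) x \(Int(frame.height)) pt, "
             + "\(Int(frame.width * scale)) x \(Int(frame.height * scale)) px")
    }
}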
I am developing a macOS app using SwiftUI, and I am encountering an issue when launching the app at login. The app starts as expected, but the window does not appear automatically. Instead, it remains in the Dock, and the user must manually click the app icon to make the window appear.
Additionally, I noticed that the timestamp obtained during the app's initialization (init) differs from the timestamp obtained in .onAppear. This suggests that .onAppear does not trigger until the user interacts with the app. However, I want .onAppear to execute automatically upon login.
Steps to Reproduce
Build the app and add it to System Settings > General > Login Items as an item that opens at login.
Quit the app and restart the Mac.
Log in to macOS.
Observe that the app starts and appears in the Dock but does not create a window.
Click the app icon in the Dock, and only then does the window appear.
Expected Behavior
The window should be created and appear automatically upon login without requiring user interaction.
.onAppear should execute immediately when the app starts at login.
Observed Behavior
The app launches and is present in the Dock, but the window does not appear.
.onAppear does not execute until the user manually clicks the app icon.
A discrepancy exists between the timestamps obtained in init and .onAppear.
Sample Code
Here is a minimal example that reproduces the issue:
LoginTestApp.swift
import SwiftUI

@main
struct LoginTestApp: App {
    @State var date2: Date

    init() {
        date2 = Date()
    }

    var body: some Scene {
        WindowGroup {
            MainView(date2: $date2)
        }
    }
}
MainView.swift
import SwiftUI

struct MainView: View {
    @State var date1: Date?
    @Binding var date2: Date

    var body: some View {
        Text("This is MainView")
        Text("MainView created: \(date1?.description ?? "")")
            .onAppear {
                date1 = Date()
            }
        Text("App initialized: \(date2.description)")
    }
}
Test Environment
MacBook Pro 13-inch, M1, 2020
macOS Sequoia 15.2
Xcode 16.2
Questions
Is this expected behavior in macOS Sequoia 15.2?
How can I ensure that .onAppear executes automatically upon login?
Is there an alternative approach to ensure the window is displayed without user interaction?
Why is there no option as a CarPlay developer to enable the creation of an app to track and enter your car's maintenance records? I know the pat reply would be that Apple doesn't want you to do this while the car is in motion. But I would normally do this while parked at the dealership or another service provider, no?
Hello.
I have created a UI component using a SwiftUI TextField.
struct SearchBar: View {
    @Binding private var text: String

    var body: some View {
        HStack {
            TextField("", text: $text, prompt: Text("Search"))
                .textFieldStyle(.plain)
                .padding()
                .foregroundStyle(.white)
            Button {
                text = ""
            } label: {
                Image(systemName: "xmark")
                    .foregroundStyle(.black)
            }
        }
        .padding(.horizontal, 8)
        .background(RoundedRectangle(cornerRadius: 8).fill(.gray))
        .padding(.horizontal, 8)
    }

    init(text: Binding<String>) {
        _text = text
    }
}

struct ParentView: View {
    @State var text = ""

    var body: some View {
        SearchBar(text: $text)
    }
}
A String value is bound to the component, and the text is cleared when the xmark button on the right is pressed.
However, the Japanese text is not cleared under certain conditions.
Type in Hiragana, press xmark without pressing “confirm” on the keyboard → can be cleared.
Type Hiragana, press “confirm” on the keyboard, and then press xmark → cannot be cleared.
Convert Japanese text to Kanji characters → can be cleared
Does anyone know of a workaround?
The Xcode version I used is 16.0.
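One workaround that is sometimes used for stale confirmed-but-uncommitted Japanese input is to change the TextField's identity when clearing, which forces SwiftUI to rebuild the field and drop the keyboard's composition state. A trimmed-down sketch of the SearchBar above, where fieldID is an added state for illustration and not an official fix:

import SwiftUI

struct SearchBar: View {
    @Binding var text: String
    @State private var fieldID = UUID()   // assumed extra state, not in the original component

    var body: some View {
        HStack {
            TextField("", text: $text, prompt: Text("Search"))
                .textFieldStyle(.plain)
                .id(fieldID)               // identity changes when we clear
            Button {
                text = ""
                fieldID = UUID()           // recreate the field so confirmed marked text is discarded
            } label: {
                Image(systemName: "xmark")
            }
        }
    }
}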
I'm working on a proof of concept, and my goal is to have an array of coordinates and animate differently to each one.
Here is the code I have:
.mapCameraKeyframeAnimator(trigger: startSequence) { camera in
    KeyframeTrack(\MapCamera.centerCoordinate) {
        for pin in viewModel.mapPins {
            if pin == viewModel.mapPins.first {
                CubicKeyframe(pin.coordinates, duration: 3)
            } else {
                CubicKeyframe(pin.coordinates, duration: 3)
            }
        }
    }
    KeyframeTrack(\MapCamera.heading) {
        for pin in viewModel.mapPins {
            if pin == viewModel.mapPins.first {
                LinearKeyframe(camera.heading, duration: 1.5)
                CubicKeyframe(camera.heading + 45, duration: 4)
            } else {
                LinearKeyframe(camera.heading, duration: 1.5)
                CubicKeyframe(camera.heading + 45, duration: 4)
            }
        }
    }
}
Now on the .heading keyframe, it throws the error: Underlying type for opaque result type 'some KeyframeTrackContent' could not be inferred from return expression. This only happens inside a for-loop and when there are multiple frames.
I figured it needs to return a KeyframeTrackContent object, but 1) it doesn't have an initializer and 2) using the builder's static func throws an unknown error. If anyone is into MapKit and could help me out, it would be much appreciated!
Edit: Please note that using Group, ForEach, or any other type of view does not work, as it needs to return KeyframeTrackContent.
When a parent view is selected for the detail pane of a NavigationSplitView, subviews appear as expected, but not with the dimensions set by .frame on the subview.
Toggling the flag works as expected, showing the subview at its idealWidth. I persist the flag in a SwiftData @Model class so that on restart and first appearance of the parent view, the Right View subview's presence is as it was left. The problem is that the .frame size is apparently ignored. No manner of programmatic view refresh seems to trigger a resize to the preferred values, only toggling the flag.
Is there a better way to handle a collapsing subview in an HSplitView? Why is the .frame not respected?
In this example I've added the else clause so HSplitView always has two views with .frame settings but the result is the same without it.
VStack {
    HSplitView {
        VStack {
            Text("left view")
        }
        .frame(
            minWidth: 100,
            idealWidth: .infinity,
            maxWidth: .infinity,
            maxHeight: .infinity
        )
        if documentSettings.nwIsPieChartShowing {
            VStack {
                Text("right view")
            }
            .frame(
                minWidth: 100,
                idealWidth: 200,
                maxWidth: .infinity,
                maxHeight: .infinity
            )
        } else {
            Text("")
                .frame(
                    minWidth: 0,
                    idealWidth: 0,
                    maxWidth: 0,
                    maxHeight: .infinity
                )
        }
    }
    HStack {
        Button("Right View",
               systemImage: { documentSettings.nwIsPieChartShowing ? "chart.pie.fill" : "chart.pie" }(),
               action: { documentSettings.nwIsPieChartShowing.toggle() }
        )
    }
}
}
}
macOS Sequoia 15.3.1, Xcode 16.2
How can I observe LiveCommunicationKit events?
I have code like this:
import UIKit
import LiveCommunicationKit
@available(iOS 17.4, *)
class LiveCallKit: NSObject, ConversationManagerDelegate {
@available(iOS 17.4, *)
func conversationManager(_ manager: ConversationManager, conversationChanged conversation: Conversation) {
}
@available(iOS 17.4, *)
func conversationManagerDidBegin(_ manager: ConversationManager) {
}
@available(iOS 17.4, *)
func conversationManagerDidReset(_ manager: ConversationManager) {
}
@available(iOS 17.4, *)
func conversationManager(_ manager: ConversationManager, perform action: ConversationAction) {
switch action.state
{
case .idle:
self.completionHandler!(InterfaceKind.reject,self.payload!)
case .running:
self.completionHandler!(InterfaceKind.reject,self.payload!)
case .complete:
self.completionHandler!(InterfaceKind.reject,self.payload!)
case .failed(let reason):
self.completionHandler!(InterfaceKind.reject,self.payload!)
default:
self.completionHandler!(InterfaceKind.reject,self.payload!)
}
}
@available(iOS 17.4, *)
func conversationManager(_ manager: ConversationManager, timedOutPerforming action: ConversationAction) {
}
@available(iOS 17.4, *)
func conversationManager(_ manager: ConversationManager, didActivate audioSession: AVAudioSession) {
}
@available(iOS 17.4, *)
func conversationManager(_ manager: ConversationManager, didDeactivate audioSession: AVAudioSession) {
}
@objc public enum InterfaceKind : Int, Sendable, Codable, Hashable {
/// Reject / hang up
case reject
/// Answer.
case answer
}
var sessoin: ConversationManager
var callId: UUID
var completionHandler: ((_ actionType: InterfaceKind,_ payload: [AnyHashable : Any]) -> Void)?
var payload: [AnyHashable : Any]?
@objc init(icon: UIImage!) {
let data:Data = icon.pngData()!;
let cfg: ConversationManager.Configuration = ConversationManager.Configuration(ringtoneName: "ring.mp3",
iconTemplateImageData: data,
maximumConversationGroups: 1,
maximumConversationsPerConversationGroup: 1,
includesConversationInRecents: false,
supportsVideo: false,
supportedHandleTypes: Set([Handle.Kind.generic]))
self.sessoin = ConversationManager(configuration: cfg)
self.callId = UUID()
super.init()
self.sessoin.delegate = self
}
@objc func toIncoming(_ payload: [AnyHashable : Any], displayName: String,actBlock: @escaping(_ actionType: InterfaceKind,_ payload: [AnyHashable : Any])->Void) async {
self.completionHandler = actBlock
do {
self.payload = payload
self.callId = UUID()
var update = Conversation.Update(members: [Handle(type: .generic, value: displayName, displayName: displayName)])
let actNumber = Handle(type: .generic, value: displayName, displayName: displayName)
update.activeRemoteMembers = Set([actNumber])
update.localMember = Handle(type: .generic, value: displayName, displayName: displayName);
update.capabilities = [ .playingTones ];
try await self.sessoin.reportNewIncomingConversation(uuid: self.callId, update: update)
try await Task.sleep(nanoseconds: 2000000000);
} catch {
}
}
}
I want to observe the button actions (answer/reject); how should I do that?
In a UIKit application, removing a view from the hierarchy is straightforward—we simply call myView.removeFromSuperview(). This not only removes myView from the UI but also deallocates any associated memory.
Now that I'm transitioning to SwiftUI, I'm struggling to understand the recommended way to remove a view from the hierarchy, given SwiftUI's declarative nature.
I understand that in SwiftUI, we declare everything that should be displayed. However, once a view is rendered, what is the correct way to remove it? Should all UI elements be conditionally controlled to determine whether they appear or not?
Below is an example of how I’m currently handling this, but it doesn’t feel like the right approach for dynamically removing a view at runtime.
Can someone guide me on the best way to remove views in SwiftUI?
struct ContentView: View {
    @State private var isVisible = true

    var body: some View {
        VStack {
            if isVisible { // set this to false to remove TextView?
                Text("Hello, SwiftUI!")
                    .padding()
            }
            Button("Toggle View") {
                ...
            }
        }
    }
}
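Conditional inclusion like this is indeed the idiomatic SwiftUI way to remove a view: when isVisible becomes false, the Text leaves the hierarchy and its state is discarded. A minimal sketch completing the toggle; the withAnimation call and the transition are assumptions added for illustration, not part of the original code:

import SwiftUI

struct ToggleExampleView: View {
    @State private var isVisible = true

    var body: some View {
        VStack {
            if isVisible {
                Text("Hello, SwiftUI!")
                    .padding()
                    .transition(.opacity)   // assumed: animate insertion/removal
            }
            Button("Toggle View") {
                withAnimation {
                    isVisible.toggle()      // flipping to false removes the Text from the hierarchy
                }
            }
        }
    }
}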
struct ContentView: View {
    var body: some View {
        ScrollView(.vertical) {
            LazyVStack(spacing: 0) {
                ForEach(0..<10000) { index in
                    // If the VStack is removed, the memory issue occurs
                    // VStack {
                    CustomView(index: index)
                    // }
                }
            }
        }
    }
}

struct CustomView: View {
    var index: Int

    var body: some View {
        VStack {
            Text("\(index)")
        }
    }
}
I reduced it to this shorter and simpler version, and the issue still reproduces.
At first, I struggled to figure out why the initial code was causing lag. After investigating with the Debug Memory Graph, I found that the generated custom view’s memory was not being released properly.
This seemed strange because I was using the custom view inside a LazyVStack.
So, I tried various approaches to resolve the issue.
In the Debug Memory Graph, I started suspecting that SwiftUI’s built-in views like VStack and HStack might be affecting memory management. To test this, I wrapped my custom view inside a VStack, and the memory issue disappeared.
However, I want to understand why I need to include the custom view inside a VStack for proper memory management.
(I simplified this code by wrapping it into a shorter version. However, in a real project, the custom view is more complex, and the data list contains more than 10,000 items. This caused severe lag.)
Xcode: 16.2, iOS 18, iOS 16