In my previous blog post, I dove into Apple’s Augmented Reality platform to build a foundation for understanding Fiori for iOS ARKit. Let's apply this new knowledge and use FioriARKit.

The first product offered in the FioriARKit Swift Package is AR Annotations. From the documentation we find,

“Annotations refer to Cards that match with a corresponding Marker located relative to an image or object in the real world. To view annotations in the world view, the user scans the image / object with the AR Scanner.”


FioriARKit Examples App Demo


This is a great high-level description, but how do we load the Card data and the positions that back these Markers?

Strategy Pattern


For FioriARKit to render an annotation scene, it needs various pieces of data that can each come in different forms. The positions that back the markers could come from a usdz or reality file. The card data could come from a model struct or a JSON file.

Annotation loading is a great opportunity for the strategy pattern with composition. Using various strategies that follow a shared contract, we can have different implementations that achieve the same result. Where this data comes from and how it's assembled is the responsibility of a concrete type that conforms to AnnotationLoadingStrategy. This protocol requires one throwing method that returns a list of ScreenAnnotations. A ScreenAnnotation is the model that serves as the single source of truth for an annotation, holding the card data and the real-world anchoring position that backs the Marker's position.
// I'll discuss the significance of the CardItem associatedtype in another post.
// For now what's important is conforming to the load method
public protocol AnnotationLoadingStrategy {
    associatedtype CardItem: CardItemModel
    var cardContents: [CardItem] { get }
    func load(with manager: ARManager) throws -> [ScreenAnnotation<CardItem>]
}
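
To make the contract concrete, here's a hypothetical sketch of a custom strategy. The ScreenAnnotation(card:) initializer is an assumption for illustration, and a real strategy would also resolve the anchor and marker positions with the ARManager; check the built-in strategies in the FioriARKit source for the actual construction.
// A hypothetical conforming strategy that assembles annotations from
// values already in memory. ScreenAnnotation(card:) is an assumed
// initializer for illustration only.
public struct InMemoryLoadingStrategy<CardItem: CardItemModel>: AnnotationLoadingStrategy {
    public var cardContents: [CardItem]

    public func load(with manager: ARManager) throws -> [ScreenAnnotation<CardItem>] {
        // A real strategy would also register anchor/position data with the manager
        cardContents.map { ScreenAnnotation(card: $0) }
    }
}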

The concrete strategy is then passed into the ARAnnotationViewModel's load method. The ViewModel's load method doesn't care where the annotations come from; it just knows the strategy it's been given conforms to AnnotationLoadingStrategy, which requires a method that tries to return a ScreenAnnotation list.
let strategy = RCProjectStrategy(cardContents: cardItems, anchorImage: anchorImage, physicalWidth: 0.1, rcFile: "MonitorRC", rcScene: "MonitorScene")
arModel.load(loadingStrategy: strategy)

Currently, FioriARKit has built-in strategies for rcproject, reality, and usdz files.

Reality Composer


Reality Composer is a tool to simplify the complex process of scene creation. We compose content visually around a chosen anchor. Without it we would have to measure the distances from our anchor and input them as (x, y, z) coordinates for each annotation. We’d also have to consider the orientation of the world and the anchors.

While Reality Composer has many great features, what we are interested in is simple placement of where the markers are relative to the image or object anchors. A sphere is an arbitrary primitive shape that can represent this. We can arrange the spheres and then preview them in AR with an iOS device to fine tune their positions.

Note: The spheres will not be visible in the final scene

After the scene is created, we can export it as a Reality file or a USDZ file, or just use the entire rcproject file. These files can simply be thought of as a list of positions in 3D space. We didn't have to measure anything!
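
As a mental model only (not FioriARKit's actual internal representation), each named sphere in the exported scene boils down to an id plus an offset from the anchor:
import simd

// A conceptual sketch: each named sphere is effectively an id plus an
// offset in meters from the anchor's origin. Names and offsets here
// are made up for illustration.
struct MarkerPosition {
    let id: String
    let offset: SIMD3<Float>
}

let exportedScene: [MarkerPosition] = [
    MarkerPosition(id: "sphereOne", offset: SIMD3<Float>(-0.3, 0.25, 0.0)),
    MarkerPosition(id: "sphereTwo", offset: SIMD3<Float>(0.0, -0.2, 0.1))
]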

FioriARKit supports both image and object anchors. Object anchors are less intuitive, so we'll take a closer look at them in the second example.

Let’s create the two scenes with different anchors, card data, and strategies.

Image Anchor, RCProjectStrategy, and CardItemModel


An rcproject file has the limitation that it needs to be in the project at compile time, but it's useful for quick prototyping.

Let's create the scene first. I want to annotate my two monitors and my keyboard.

  1. Create a reality composer project in the iOS app and let’s name it MonitorRC.rcproject

  2. Choose the Image Anchor in the entity drawer and let’s name the scene MonitorScene

  3. Upload the reference image and set its real world physical dimensions

  4. Choose 3 spheres and we can preview the scene and position them until satisfied

  5. The names of the spheres will represent their unique ids. I'll name mine keyboard, leftMonitor, and rightMonitor

  6. Long press to share the .rcproject file from the projects menu in Reality Composer

  7. Drag it into your Xcode project navigator


The entity drawer is where you can configure your entities and the anchor. After tapping on a sphere, the drawer changes, and here you can change the sphere's name.

Set up Image Anchor Scene / Preview in AR and Fine Tune



CardItemModel


We need to store the data that will be displayed in the cards. We could do this with JSON, but let's use a model; fortunately, FioriARKit has the CardItemModel protocol for this. Let's create a struct that conforms to it:
import FioriARKit
import SwiftUI

public struct StringIdentifyingCardItem: CardItemModel {
    public var id: String
    public var title_: String
    public var descriptionText_: String?
    public var detailImage_: Image?
    public var actionText_: String?
    public var icon_: Image?

    public init(id: String, title_: String, descriptionText_: String? = nil, detailImage_: Image? = nil, actionText_: String? = nil, icon_: Image? = nil) {
        self.id = id
        self.title_ = title_
        self.descriptionText_ = descriptionText_
        self.detailImage_ = detailImage_
        self.actionText_ = actionText_
        self.icon_ = icon_
    }
}


Creating the ContentView


We have our card data, our project and scene names, and the image that backs the anchor. The RCProjectStrategy accepts this data and is then passed into the ViewModel's load method. The SingleImageARCardView is the SwiftUI view that displays the entire experience using the ARAnnotationViewModel. Notice that the ids in cardItems match the names of the spheres we created in Reality Composer.

Note: The image passed into the SingleImageARCardView is what's displayed on the AR Scanning View. The image also needs to be passed into the strategy, along with its physical width, so FioriARKit can configure it as a detectable image.
import FioriARKit
import SwiftUI
import UIKit

struct BlogPostContentView: View {
    @StateObject var arModel = ARAnnotationViewModel<StringIdentifyingCardItem>()

    var body: some View {
        SingleImageARCardView(arModel: arModel,
                              image: Image("qrImage"),
                              cardAction: { id in
                                  // set the card action for id corresponding to the CardItemModel
                                  print(id)
                              })
            .onAppear(perform: loadInitialData)
    }

    func loadInitialData() {
        let cardItems = [
            StringIdentifyingCardItem(id: "keyboard", title_: "MBP Keyboard"),
            StringIdentifyingCardItem(id: "leftMonitor", title_: "MBP Monitor", icon_: Image(systemName: "display")),
            StringIdentifyingCardItem(id: "rightMonitor", title_: "External Monitor", icon_: Image(systemName: "display.2"))
        ]
        guard let anchorImage = UIImage(named: "qrImage") else { return }
        let strategy = RCProjectStrategy(cardContents: cardItems, anchorImage: anchorImage, physicalWidth: 0.1, rcFile: "MonitorRC", rcScene: "MonitorScene")
        arModel.load(loadingStrategy: strategy)
    }
}


Monitor Scene App



Object Anchor, RealityFileStrategy, and JSON


What even is an object anchor?

ARKit is always mapping a point cloud of the world during World Tracking. Using Reality Composer, we can have it ‘scan’ an object to create a point cloud representation of it. Under the hood, this point cloud is saved as an arobject file. An AR session can then be configured to detect a match for this object’s point cloud in the scene. We can now anchor virtual content around the object in the physical world.
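
FioriARKit configures all of this for us, but to ground the idea, here is roughly what object detection looks like in plain ARKit (the "PS4Controller" arobject file name is hypothetical):
import ARKit

// Plain-ARKit sketch of object detection; FioriARKit does the
// equivalent under the hood when given a reality file.
func runObjectDetection(on session: ARSession) throws {
    guard let url = Bundle.main.url(forResource: "PS4Controller", withExtension: "arobject") else { return }
    let referenceObject = try ARReferenceObject(archiveURL: url)

    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects = [referenceObject]
    session.run(configuration)
    // When ARKit matches the stored point cloud in the live scene, the
    // session delegate receives an ARObjectAnchor to attach content to.
}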

Apple’s reality file supports Object Anchors, but usdz does not. So, we will use the RealityFileStrategy.

Like before we will use Reality Composer to compose a scene:

  1. Open Reality Composer on an iOS device and choose object anchor

  2. This time, only the scene name matters; name it PS4Scene

  3. On the top bar, tap the icon that looks like a cube inside a gear.

  4. There will be an option to choose an AR Object Asset; from here, select Scan. I'll scan a PS4 controller

  5. After composing the scene, we will again give the spheres unique id names... Dune, Sunglasses, and Keys

  6. On the top bar the 3 dots will open a menu where you can choose to export

  7. Export the Current Scene as a reality file


For the sake of this tutorial, you can drag the reality file into the Xcode project navigator. We'll use Bundle.main.url to access the URL, but in practice the URL could be a path to the app's documents directory or fetched from outside the app.



Scan with Reality Composer iOS app / Compose spheres around Object



JSON Data


This time our data will be in JSON format. Fortunately, our strategies each come with an initializer for passing in card data as Data from a JSON array. Behind the scenes, this is decoded into a built-in CardItemModel that conforms to Decodable.
[
    {
        "id": "Dune",
        "title_": "Dune Book",
        "descriptionText_": "Price: 10.99",
        "detailImage_": null,
        "actionText_": "Order",
        "icon_": "book.fill"
    },
    // Truncated for brevity...
]
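
Behind the scenes, the strategy's JSON initializer does something roughly like the following sketch. The exact decoding logic (for example, mapping the icon_ string to an SF Symbol Image) may differ:
import Foundation

// A sketch of the decoding step, assuming DecodableCardItem mirrors
// the JSON keys above; the actual implementation may differ.
func decodeCardItems(from jsonData: Data) throws -> [DecodableCardItem] {
    try JSONDecoder().decode([DecodableCardItem].self, from: jsonData)
}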


Creating the ContentView


This time there are a few changes to our content view. Let's focus on the loadInitialData method. We retrieve the URLs for our JSON data and reality file to create the RealityFileStrategy, then attempt to load the data with a try and pass the strategy into the ViewModel.

Unlike the image anchor, the object anchor doesn't need its data passed in separately. The reality file stores the arobject file internally, and FioriARKit can configure it under the hood.

Note: The image that's passed into the SingleImageARCardView is what's displayed on the ScanningView; this is an example of why it's independent of the image anchor in the example above. We now need an image of the PS4 controller to coach the user on what to scan.
import FioriARKit
import SwiftUI

struct BlogPostObjectContentView: View {
    @StateObject var arModel = ARAnnotationViewModel<DecodableCardItem>()

    var body: some View {
        SingleImageARCardView(arModel: arModel,
                              image: Image("ps4Controller"),
                              cardAction: { id in
                                  // set the card action for id corresponding to the CardItemModel
                                  print(id)
                              })
            .onAppear(perform: loadInitialData)
    }

    func loadInitialData() {
        guard let jsonURL = Bundle.main.url(forResource: "PS4ControllerData", withExtension: "json"),
              let realityURL = Bundle.main.url(forResource: "PS4Reality", withExtension: "reality") else { return }

        do {
            let jsonData = try Data(contentsOf: jsonURL)
            let strategy = try RealityFileStrategy(jsonData: jsonData, realityFilePath: realityURL, rcScene: "PS4Scene")
            arModel.load(loadingStrategy: strategy)
        } catch {
            print(error)
        }
    }
}


Object Anchor Demo App


 

Conclusion


FioriARKit abstracts a lot of complex implementation away, yet the marker positions and card data still need to be provided. The data can come from different sources and formats and be brought together in a variety of combinations, especially for an Augmented Reality experience. Using the strategy pattern, we have vastly simplified this process for different scenarios. Perhaps there are unknown or future ways of sourcing data; as long as we can conform to the AnnotationLoadingStrategy protocol and return a list of ScreenAnnotations, we can support them.