Augmented Reality with Swift 5 – How to Start

Imagine that the border between the virtual and the real world has disappeared.

Augmented reality is the result of using technology to overlay information, such as sounds, images, and text, onto the real world we see.

Popular Apple AR apps

Measure

With iOS 12 you can leave your tape measure in the back of a kitchen drawer. After installing iOS 12, you’ll find one of Apple’s new augmented reality apps, called Measure. The app uses AR as a ruler, measuring the objects around you with the help of the phone’s camera.

Just a Line

Just a Line allows you to create simple drawings in augmented reality and then share your creation as a short video. Draw alone or with a friend, then tap «record» and share what you’ve done with #justaline.

Knightfall: AR

Knightfall: AR is an augmented reality experience that transports you into the world of the Knights Templar, who defend the city of Acre from an invading army and protect Christianity’s most precious relic, the Holy Grail. Use your wits and sharp eyes to kill as many enemy warriors as you can before they breach the walls.

System requirements

Apple brought augmented reality to the masses starting with iOS 11. Since the launch of iOS 11, hundreds of millions of iPhones and iPads have supported AR, which immediately made Apple’s ARKit the biggest AR platform in the world.

If you are interested in AR app development for iOS, you are in the right place.

ARKit 2

ARKit 2 is the platform Apple introduced for iOS developers together with iOS 12. It lets developers create shared augmented reality spaces and persistent objects tied to specific places, as well as use object detection and image tracking. We will use ARKit 2 to create our demo project.

Project settings

New project creation

To start, open Xcode and choose File > New > Project from the menu. Choose «Single View App» and click «Next». Xcode also ships an ARKit template, but for a better understanding we’ll build our new augmented reality app from the «Single View App» template.

Camera Usage Permission

Before launching our AR application, we need to inform the user that we will use the device camera for augmented reality. This requirement has existed since iOS 10. To do so, open Info.plist, right-click the empty space and choose «Add Row», then add the «Privacy — Camera Usage Description» key (NSCameraUsageDescription in the raw plist) and, finally, provide a description of why you need to use the camera.

Adding ARKit SceneKit View into the storyboard

Open Main.storyboard and choose ARKit SceneKit View from the library.

Drag it onto the main view controller and pin it to the edges.

Then, by right-click-dragging (or Control-dragging) from the ARKit SceneKit View to the controller, we connect it as an outlet. After adding the @IBOutlet, we’ll see the error «Use of undeclared type ARSCNView»; this happens because we have not imported ARKit.

We’ll fix this simply by adding import ARKit to our controller.
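After these steps, the top of the controller might look roughly like this (a minimal sketch; sceneView is the outlet name used in the snippets below):

import UIKit
import ARKit

class ViewController: UIViewController {

    // The ARKit SceneKit View from Main.storyboard, connected as an outlet
    @IBOutlet weak var sceneView: ARSCNView!
}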

ARSCNView Session Adjustment

We want our app to look at the world through the camera and start detecting the environment around us. Apple made augmented reality available to developers without the need to build the whole technology from scratch. Thanks, Apple!

Nice. It’s time to set up the ARKit SceneKit View. Add the following code to the ViewController class:

private let configuration = ARWorldTrackingConfiguration()

override func viewDidLoad() {
    super.viewDidLoad()
    // Show statistics such as fps and timing information
    self.sceneView.showsStatistics = true
    self.sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    self.sceneView.session.run(configuration)
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    self.sceneView.session.pause()
}

The first line creates the world tracking configuration. But wait, what is that? ARWorldTrackingConfiguration tracks the orientation and position of the device. It also detects real-world surfaces visible through the camera.
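The demo keeps the default configuration, but ARWorldTrackingConfiguration can be tuned further. As an optional illustration (not used in this article’s code), asking ARKit to also detect horizontal planes looks roughly like this:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Optional, not part of the demo: also report horizontal surfaces
    // (tables, floors) as plane anchors while tracking the device.
    configuration.planeDetection = .horizontal
    self.sceneView.session.run(configuration)
}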

Then, for debugging, we set sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints] in viewDidLoad. This lets us see how ARKit detects a surface. When we launch the app, we should see lots of yellow spots on surfaces. These spots let ARKit estimate properties such as the orientation and position of physical objects in the current environment, and the more spots we see in an area, the better the chances that ARKit will be able to detect and track the environment.

Next, to start the scene before the screen appears, we run our scene’s session with our configuration in viewWillAppear, and we pause it in viewWillDisappear before the screen disappears. Let’s launch the app and see what we’ve got:

We can see the statistics at the bottom of the screen; tapping the + makes them more detailed. We also see yellow spots; they indicate the places where feature points are detected.

AR axes

The X-axis is used to position an object in the world to the right or to the left. The Y-axis positions objects higher or lower, and the Z-axis defines how close the object is to the camera.

A positive X value puts an object to the right and a negative one to the left. A positive Y value puts the object higher and a negative one lower, while a positive Z brings the object closer to the user and a negative Z moves it further away.
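A small sketch to make this concrete (the extra debug option and the example values are illustrative, not part of the article’s code): showWorldOrigin draws the world origin and its axes next to the feature points, and the constants show how positive and negative components map to directions.

// Optional: draw the world origin and its axes in addition to feature points.
self.sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints,
                               ARSCNDebugOptions.showWorldOrigin]

// Example positions relative to the world origin (units are meters):
let toTheRight = SCNVector3( 0.3, 0.0,  0.0)  // positive X: 30 cm to the right
let toTheLeft  = SCNVector3(-0.3, 0.0,  0.0)  // negative X: 30 cm to the left
let above      = SCNVector3( 0.0, 0.3,  0.0)  // positive Y: 30 cm up
let inFront    = SCNVector3( 0.0, 0.0, -0.3)  // negative Z: 30 cm away from the user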

Adding a virtual object

Now let’s add the code for adding a virtual object to our project.

private var node: SCNNode!

func addBox(x: Float = 0, y: Float = 0, z: Float = -0.2) {
    // 1
    let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)

    // 2
    let colors = [UIColor.green,  // front
                  UIColor.red,    // right
                  UIColor.blue,   // back
                  UIColor.yellow, // left
                  UIColor.purple, // top
                  UIColor.gray]   // bottom
    let sideMaterials = colors.map { color -> SCNMaterial in
        let material = SCNMaterial()
        material.diffuse.contents = color
        material.locksAmbientWithDiffuse = true
        return material
    }
    box.materials = sideMaterials

    // 3
    self.node = SCNNode()
    self.node.geometry = box
    self.node.position = SCNVector3(x, y, z)

    // 4
    sceneView.scene.rootNode.addChildNode(self.node)
}

Let’s look at this code in more detail:

  1. We create a virtual object here in the form of a cube with a side of 0.1 meters (in ARKit, one unit corresponds to one meter).
  2. We create an array with a different color for each face, map each color to an SCNMaterial, and assign the resulting array to the materials of our cube.
  3. We create an SCNNode object; it represents the position and coordinates of an object in three-dimensional space. The node itself does not contain any visual content, so we attach our cube geometry to it.
  4. Finally, we add our node to the scene.
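If you want to check that the cube renders before any gestures are wired up, one option (a temporary test, not part of the article’s flow, since the next section places the cube on tap) is to call the function once from viewDidLoad():

// Temporary check: shows the cube 20 cm in front of the camera's starting
// position (the default z = -0.2 in addBox). Remove it once tap-to-place works.
self.addBox()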

Positioning and moving a virtual object

It’s time to add our cube to the real world. Let’s make it so that the cube is added at the point on the screen where we tap.

First, we add a function that detects a tap on our scene and call it in viewDidLoad().

private func addTapGesture() {
    let tapGesture = UITapGestureRecognizer(target: self, action: #selector(didTap(_:)))
    self.sceneView.addGestureRecognizer(tapGesture)
}
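With this in place, viewDidLoad might now look like this (a sketch combining the earlier setup with the new call):

override func viewDidLoad() {
    super.viewDidLoad()
    // Show statistics such as fps and timing information
    self.sceneView.showsStatistics = true
    self.sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]
    // Register the tap recognizer defined above
    self.addTapGesture()
}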

Then we’ll add the code that finds the surface under the tap and either adds the cube to the scene if it isn’t there yet or moves it if it already is.

@objc func didTap(_ gesture: UITapGestureRecognizer) {
    // 1
    let tapLocation = gesture.location(in: self.sceneView)
    let results = self.sceneView.hitTest(tapLocation, types: .featurePoint)

    // 2
    guard let result = results.first else {
        return
    }

    // 3
    let translation = result.worldTransform.translation

    // 4
    guard let node = self.node else {
        self.addBox(x: translation.x, y: translation.y, z: translation.z)
        return
    }
    node.position = SCNVector3Make(translation.x, translation.y, translation.z)
}

// 5
extension float4x4 {
    var translation: SIMD3<Float> {
        let translation = self.columns.3
        return SIMD3<Float>(translation.x, translation.y, translation.z)
    }
}

Let’s look into this code:

  1. We determine the location of the tap on the screen and, passing it to our scene’s hitTest, get an array of ARHitTestResult values that describe the points of contact with detected surfaces.
  2. If any contact points were found, we take the first one and continue; otherwise, we exit the function.
  3. Using the worldTransform field of the contact point, we determine the coordinates of this point in the real-world coordinate system.
  4. If the object already exists, we change its position; otherwise, we call the adding function and pass it the coordinates of the contact point.
  5. This is an extension for float4x4 that makes working with the coordinates more convenient.

When we run the app and tap on a detected surface, we’ll see our multi-colored cube:

Scaling an object

Let’s now add scaling for our cube. We’ll make the cube grow and shrink with a pinch gesture.

As in the previous example, we’ll add a function that detects the pinch gesture and call it in viewDidLoad().

private func addPinchGesture() {
    let pinchGesture = UIPinchGestureRecognizer(target: self, action: #selector(didPinch(_:)))
    self.sceneView.addGestureRecognizer(pinchGesture)
}

Then we’ll add a function that scales our cube up and down within certain limits.

@objc func didPinch(_ gesture: UIPinchGestureRecognizer) {
    switch gesture.state {
    // 1
    case .began:
        gesture.scale = CGFloat(node.scale.x)
    // 2
    case .changed:
        var newScale: SCNVector3
        // a
        if gesture.scale < 0.5 {
            newScale = SCNVector3(x: 0.5, y: 0.5, z: 0.5)
        // b
        } else if gesture.scale > 3 {
            newScale = SCNVector3(3, 3, 3)
        // c
        } else {
            newScale = SCNVector3(gesture.scale, gesture.scale, gesture.scale)
        }
        // d
        node.scale = newScale
    default:
        break
    }
}

Let’s look at this code:

  1. When the gesture begins, we set the gesture’s scale to the current scale of our cube, so that if the cube has already been resized we continue scaling from there instead of starting from the beginning.
  2. While the gesture changes (an equivalent, more compact version is sketched after this list):
  • If the scale is less than 0.5, we clamp it to 0.5.
  • If the scale exceeds 3, we clamp it to 3.
  • In all other cases we use the scale reported by the pinch gesture.
  • We then apply the resulting scale to our node.
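As a design note, the same clamping to the [0.5, 3] range can be written more compactly with min and max; the sketch below is an equivalent alternative to the didPinch above and, like it, assumes the cube has already been added:

@objc func didPinch(_ gesture: UIPinchGestureRecognizer) {
    switch gesture.state {
    case .began:
        // Continue from the cube's current scale instead of starting over.
        gesture.scale = CGFloat(node.scale.x)
    case .changed:
        // Clamp the pinch scale to [0.5, 3] and apply it uniformly.
        let clamped = Float(min(max(gesture.scale, 0.5), 3.0))
        node.scale = SCNVector3(clamped, clamped, clamped)
    default:
        break
    }
}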

Rotating an object

Now let’s add rotation of our object around the Y-axis. We’ll make the cube rotate around the Y-axis with a rotation gesture.

As in the previous examples, we’ll add a function that detects the rotation gesture and call it in viewDidLoad().

private func addRotationGesture() {
    let rotationGesture = UIRotationGestureRecognizer(target: self, action: #selector(didRotate(_:)))
    self.sceneView.addGestureRecognizer(rotationGesture)
}

And the rotation of the cube itself:

private var lastRotation: Float = 0

@objc func didRotate(_ gesture: UIRotationGestureRecognizer) {
    switch gesture.state {
    case .changed:
        // 1
        self.node.eulerAngles.y = self.lastRotation + Float(gesture.rotation)
    case .ended:
        // 2
        self.lastRotation += Float(gesture.rotation)
    default:
        break
    }
}

Let’s look at this code:

  1. We change the node’s rotation angle around the Y-axis (eulerAngles.y).
  2. We save the last rotation value so the next gesture continues from it.
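For reference, once all three gesture recognizers are registered, viewDidLoad might look like this (a sketch assembled from the snippets above):

override func viewDidLoad() {
    super.viewDidLoad()
    // Show statistics such as fps and timing information
    self.sceneView.showsStatistics = true
    self.sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]
    // Gesture recognizers: place, scale, and rotate the cube
    self.addTapGesture()
    self.addPinchGesture()
    self.addRotationGesture()
}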

The future of ARKit

At its WWDC 2019 developer conference, Apple presented the updated ARKit 3 platform. ARKit 3 goes further than ever before, using People Occlusion to show AR content naturally in front of or behind people, tracking up to 3 people at a time, supporting shared sessions, and much more. Using ARKit’s new understanding of people, it becomes possible to integrate people’s movement into an app.

Conclusion

Congratulations! We’ve created a small AR app. Now you know how to set up a project, add virtual objects, and change their position, scale, and rotation. ARKit is a vast topic, and this article has only touched on some basics; it’s just the tip of the great iceberg called ARKit. I hope this information was useful to you, and I wish you good luck in your endeavors.

You can find the full demo project on Bitbucket.