iOS 13 is one of the most widely used mobile operating systems worldwide. Its release brought a set of great new development tools, and that is what this article is about.
So, what’s new?
- Tools for building augmented reality experiences: ARKit 3, Reality Composer, and RealityKit.
- Machine learning solutions: Core ML 3 and the new Create ML app.
- The ability to sign in to your apps with an Apple ID.
- An improved Siri and the new Siri Shortcuts.
- SwiftUI, which makes building user interfaces much simpler and faster.
Creating augmented reality experiences has become much easier with the new iOS 13 tools: ARKit 3, Reality Composer, and RealityKit. Let's see what has been improved in ARKit and what Reality Composer and RealityKit are.
ARKit 3 New Capabilities:
- Motion Capture. ARKit can now intelligently track and capture human body movement in real time.
- People Occlusion. ARKit can now determine a person's position in the AR scene, which makes it possible to place virtual objects in front of and behind people.
- Simultaneous front and back camera. Apps can now use the front and back cameras at the same time, which opens even greater possibilities than before.
- Face tracking. ARKit can now track up to three faces at a time using the TrueDepth camera.
- Collaborative sessions. You can now run the same AR environment on several devices and interact with it together, which creates great opportunities, especially for multi-user games.
- Other improvements. 3D object detection in complex environments has been improved.
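The Motion Capture feature above can be sketched as follows. This is a minimal example, not a complete app: the configuration and delegate calls are real ARKit 3 API, while the class name and the printed output are illustrative.

```swift
import ARKit
import RealityKit

final class BodyTracker: NSObject, ARSessionDelegate {
    func start(on arView: ARView) {
        // Motion Capture needs hardware support (A12 chip or newer)
        guard ARBodyTrackingConfiguration.isSupported else { return }
        arView.session.delegate = self
        arView.session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // Joint transforms are relative to the body anchor
            if let head = bodyAnchor.skeleton.modelTransform(for: .head) {
                print("Head position:", head.columns.3)
            }
        }
    }
}
```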
The RealityKit framework was created for 3D simulation and rendering. RealityKit uses the information provided by ARKit to integrate 3D models and objects into the real world. Its capabilities include:
- Importing 3D content, which can be created with Reality Composer.
- Importing audio content into the AR environment.
- Adding physics simulations and animations to virtual objects.
- Adding triggers for user actions or changes in the AR environment.
- Synchronization across devices to run collaborative sessions.
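A minimal RealityKit sketch tying these ideas together: anchor a bundled 3D model to the first horizontal plane found. The model name "toy_robot" is a placeholder for an asset shipped with your app.

```swift
import RealityKit

let arView = ARView(frame: .zero)

// Anchor content to the first horizontal plane ARKit detects
let anchor = AnchorEntity(plane: .horizontal)

// "toy_robot" is a placeholder for a model in your app bundle
if let robot = try? ModelEntity.loadModel(named: "toy_robot") {
    robot.generateCollisionShapes(recursive: true) // enables physics and taps
    anchor.addChild(robot)
}
arView.scene.addAnchor(anchor)
```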
Reality Composer is a new and very powerful Apple application for creating 3D content and AR environments. You can attach behaviors to 3D models, audio content, and animations, describing how these objects should react, and then export all of it to applications that use the RealityKit framework mentioned above. Reality Composer ships with Xcode 11 and later.
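When you add a Reality Composer project to an Xcode 11 app target, Xcode generates a Swift API for loading its scenes. The type and method names below ("Experience", "loadBox") are generated from the project and scene names, so they are placeholders; yours will differ.

```swift
import RealityKit

// "Experience" and "loadBox" are generated by Xcode from the
// .rcproject file and scene names; these names are placeholders.
func loadComposerScene(into arView: ARView) {
    if let boxScene = try? Experience.loadBox() {
        arView.scene.anchors.append(boxScene)
    }
}
```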
Let's see what's new in machine learning from Apple: Core ML 3 and Create ML.
Core ML 3
Core ML 3 now supports enormous neural networks with over 100 layer types. Performance has also improved thanks to more effective use of the GPU and the Neural Engine.
Core ML 3 Capabilities:
- On-device training. Core ML models can now be updated with user data right on the device, which keeps the models relevant while preserving user privacy.
- Computer vision. A very useful set of features for apps using machine learning: you can now detect faces and movement, recognize text, and much more.
- Document Camera API. A new API for detecting and capturing documents with the device camera.
- Speech. Speech recognition has become smarter and now runs on-device for major languages. You can get metadata about pronunciation, search for specific phrases in speech, and much more.
- Natural Language. Large texts can now be analyzed to extract the data you need. The service currently supports English, French, German, Italian, Simplified Chinese, and Spanish.
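The on-device training bullet above can be sketched with Core ML's new `MLUpdateTask`. This assumes you already have an updatable model on disk and user samples wrapped in an `MLBatchProvider`; the URLs and function name are placeholders.

```swift
import CoreML

// A sketch of on-device personalization. `modelURL`, `samples`, and
// `updatedURL` are placeholders supplied by the calling code.
func personalize(modelURL: URL,
                 samples: MLBatchProvider,
                 saveTo updatedURL: URL) throws {
    let task = try MLUpdateTask(forModelAt: modelURL,
                                trainingData: samples,
                                configuration: nil) { context in
        // Persist the personalized model for future predictions
        try? context.model.write(to: updatedURL)
    }
    task.resume() // training runs entirely on the device
}
```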
The Create ML app is an extremely convenient application for building, training, and deploying machine learning models.
Create ML Capabilities:
- Model Templates. The app ships with built-in templates for detecting faces and objects, sounds, and motion.
- Multi-model Training. You can train several models at a time, feeding them different data sets.
- Preview. You can test a model before deploying it; the preview also evaluates the model and provides analysis.
- eGPU Training Support. Models can now be trained using an external GPU connected to your Mac.
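Training with Create ML can also be scripted in Swift on macOS (for example, in a playground). A minimal sketch of an image classifier, where the directory paths are placeholders and each subfolder name is treated as a class label:

```swift
import CreateML
import Foundation

// Placeholder paths: each subdirectory name is used as a class label
let trainDir = URL(fileURLWithPath: "/path/to/train")
let testDir  = URL(fileURLWithPath: "/path/to/test")

// Train an image classifier from labeled folders of images
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainDir))

// Evaluate on held-out images, then export for use with Core ML
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testDir))
print(evaluation)
try classifier.write(to: URL(fileURLWithPath: "/path/to/Classifier.mlmodel"))
```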
Siri is one of the best intelligent assistants worldwide. It's certainly not Jarvis from Iron Man, but it's quite a good way to interact with a mobile device. iOS 13 added Shortcuts and expanded Siri's settings.
Shortcuts let users interact with applications by voice or by tapping actions in the Shortcuts app. You can now create shortcuts that perform specific functions in your app when it's launched: order food or open a map, for instance.
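To make an action available as a shortcut, an app donates it to the system, for example via `NSUserActivity`. A minimal sketch, where the activity type, title, and identifier are app-specific placeholders:

```swift
import Intents

// Donate an activity so Siri can learn and suggest it as a shortcut.
// The activity type, title, and identifier are placeholders.
func makeOrderShortcutActivity() -> NSUserActivity {
    let activity = NSUserActivity(activityType: "com.example.order-coffee")
    activity.title = "Order my usual coffee"
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true
    activity.persistentIdentifier = "order-coffee"
    return activity // assign to a view controller's userActivity to donate
}
```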
Siri can now ask follow-up questions to get more information, which lets your applications interact with users more intelligently. For example, when a user says, "Open the map and create a route to work," Siri may ask how they will get to work and offer a list of the available means of transport.
Siri also studies users and suggests app shortcuts based on the results. There is no need to worry about privacy: Siri trains locally, on the device only, so user data stays confidential and secure.
SwiftUI is a new framework for building user interfaces with the Swift we like so much.
- Declarative syntax. SwiftUI is declarative: we describe how our UI should look in quite a simple way, like this:
Image(post.avatar)
    .resizable()
    .clipShape(Circle())
    .frame(width: 50, height: 50)
    .clipped()
The declarative style also applies to animation; adding just one line to the code for your image is enough:
Image(post.avatar)
    .resizable()
    .clipShape(Circle())
    .frame(width: 50, height: 50)
    .clipped()
    .animation(.easeInOut())
- Dynamic replacement. Everything you change on the canvas (the Xcode design tool on the right) changes in the code, and changes in the code are reflected on the canvas.
- Preview. You can now preview several screens simultaneously with different orientations, scales, fonts, and many other parameters.
- Native on all platforms. SwiftUI delivers excellent performance on all Apple platforms, while letting you use all the native design elements Apple users are accustomed to.
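Putting the snippets above into a complete view shows how little code SwiftUI needs. A minimal sketch, assuming a simple `Post` model of our own (all names are illustrative); the `PreviewProvider` at the bottom drives the canvas preview:

```swift
import SwiftUI

// An illustrative model for the post used in the snippets above
struct Post {
    let avatar: String
    let author: String
    let text: String
}

struct PostRow: View {
    let post: Post

    var body: some View {
        HStack(alignment: .top) {
            Image(post.avatar)
                .resizable()
                .clipShape(Circle())
                .frame(width: 50, height: 50)
            VStack(alignment: .leading) {
                Text(post.author).font(.headline)
                Text(post.text).font(.body)
            }
        }
        .padding()
    }
}

// Drives the live canvas preview in Xcode 11
struct PostRow_Previews: PreviewProvider {
    static var previews: some View {
        PostRow(post: Post(avatar: "avatar",
                           author: "Jane",
                           text: "Hello, SwiftUI!"))
    }
}
```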
Privacy and Security
Sign in with Apple ID
iOS 13 brings the ability to log in to apps with an Apple ID. It's a great way to identify a user while respecting their privacy and security. All Apple ID accounts are protected with two-factor authentication, and Apple states it does not track user activity in applications. Face ID and Touch ID can also be used to authorize in apps. Since almost every Apple user already has an account, sign-in takes less time and users get into your app faster, which makes a positive impression on the UX.
- Privacy. Sign in with Apple was built using best practices for protecting users. The data Apple collects is limited to a name and email address, and Apple does not track user activity in the app. Users who want to keep their email address private can use the email relay feature, which lets them receive messages without revealing their real address.
- Security. Every Apple ID used with Sign in with Apple is automatically protected with two-factor authentication, and Face ID or Touch ID can be used as well.
- Anti-fraud. Sign in with Apple uses machine learning and neural networks to distinguish real users from fake accounts, so you can easily detect a fake account and take the necessary action.
- Cross-platform. Sign in with Apple works on iOS, macOS, tvOS, and watchOS.
- Enterprise-ready. Greater attention to confidentiality and security makes it easier to integrate iPhones and iPads into existing corporate authentication systems.
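Starting the Sign in with Apple flow takes only a few lines with the AuthenticationServices framework. A minimal sketch (the function name is illustrative); on success, the delegate receives a credential containing the user identifier, name, and email:

```swift
import AuthenticationServices

// A minimal sketch of starting the Sign in with Apple flow.
// The delegate handles the resulting credential or error.
func startSignInWithApple(
    delegate: ASAuthorizationControllerDelegate &
              ASAuthorizationControllerPresentationContextProviding
) {
    let request = ASAuthorizationAppleIDProvider().createRequest()
    request.requestedScopes = [.fullName, .email] // only name and email

    let controller = ASAuthorizationController(authorizationRequests: [request])
    controller.delegate = delegate
    controller.presentationContextProvider = delegate
    controller.performRequests()
}
```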
- Camera. The Portrait Segmentation API was updated, making it possible to create really cool effects for photos taken from apps.
- Maps. MapKit overlays and filtering by points of interest have been improved.
- Location. The security and privacy of location access have been improved.
We have looked at the new development possibilities in iOS 13, and they are impressive. To recap:
- ARKit 3, RealityKit, and Reality Composer
- Core ML 3 and Create ML
- Siri Shortcuts
- SwiftUI
- Sign in with Apple
- Camera, maps, and location improvements.
We hope you enjoyed the overview. Now go and bring the new possibilities of iOS 13 to life. Good luck!