Core ML brings machine learning to Apple developers

From InfoWorld: Earlier this week Apple unveiled Core ML, a software framework that lets developers deploy and work with trained machine learning models in apps on all of Apple’s platforms—iOS, macOS, tvOS, and watchOS.

Core ML is intended to spare developers from having to build the platform-level plumbing themselves for deploying a model, serving predictions from it, and handling any exceptional conditions that might arise. But it is currently a beta product, and one with a tightly constrained feature set.

Core ML is accompanied by three frameworks for serving predictions: Foundation, which supplies common data types and functionality used in Core ML apps; Vision, for image analysis; and GameplayKit, for handling gameplay logic and behaviors.

Each framework provides high-level objects, implemented as classes in Swift, that cover both specific use cases and more open-ended prediction serving. The Vision framework, for instance, provides classes for face detection, barcode detection, text detection, and horizon detection, as well as more general classes for tasks like object tracking and image alignment.
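As a rough illustration of those high-level classes, a minimal Vision sketch for face detection might look like the following; the class names (`VNImageRequestHandler`, `VNDetectFaceRectanglesRequest`, `VNFaceObservation`) are the framework's real API, while the image path is a placeholder assumption:

```swift
import Foundation
import Vision

// Placeholder path — substitute a real image on disk.
let imageURL = URL(fileURLWithPath: "/path/to/photo.jpg")

// A handler bound to one image; requests are performed against it.
let handler = VNImageRequestHandler(url: imageURL, options: [:])

// A high-level request: Vision runs its own model under the hood
// and calls back with observations when it finishes.
let faceRequest = VNDetectFaceRectanglesRequest { request, error in
    guard let faces = request.results as? [VNFaceObservation] else { return }
    for face in faces {
        // Bounding boxes are reported in normalized image coordinates.
        print("Face at \(face.boundingBox)")
    }
}

do {
    try handler.perform([faceRequest])
} catch {
    print("Vision request failed: \(error)")
}
```

The point of the design is that the developer never touches the underlying model: constructing a request class and performing it against an image handler is the entire prediction-serving workflow.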

View: Article @ Source Site