Serdar Yegulalp
Senior Writer

Azure’s new machine learning features embrace Python

news
Sep 25, 2018
2 mins

New tools for Azure ML include integration with Python environments and self-tuning training for machine learning models


Microsoft has announced several new additions to its Azure ML machine learning offering, including better integration with Python and automated self-tuning features for faster model development.

Python is a staple language for machine learning, thanks to its low barrier to entry and its wide range of machine learning libraries and support tools. Azure’s new Python offering is an SDK that lets Azure ML connect to a developer’s existing Python environment.

The SDK is distributed as the azureml-sdk package, which can be installed with Python’s pip package manager. Most Python environments, from a generic Python install to data science setups like the Anaconda distribution or a Jupyter notebook, can connect to Azure ML this way.

Tools offered through the SDK include data preparation, logging of experiment-run results, saving and retrieving experiment data from Azure blob storage, automatic distribution of model training across multiple nodes, and automatic creation of execution environments for jobs, such as remote VMs, Docker containers, and Anaconda environments.

Another new Azure ML feature supported by the Python SDK is automated machine learning. The underlying concept isn’t new—it’s a form of hyperparameter optimization, a way to automatically tune the parameters that govern how a machine learning model is trained in order to yield better results.

Microsoft describes it as “a recommender system for machine learning pipelines. Similar to how streaming services recommend movies for users, automated machine learning recommends machine learning pipelines for data sets.” Microsoft also claims the automation can be done without looking directly at sensitive data, and thus preserve users’ privacy.
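The general idea behind hyperparameter optimization can be shown in a few lines of plain Python. The sketch below is a toy illustration only—it is not Azure ML’s implementation, and the function names are invented for the example. It searches over candidate learning rates for a trivial gradient-descent model and keeps the one that scores best:

```python
# Toy sketch of hyperparameter tuning: try candidate settings,
# score each one, and keep the best. Not Azure ML's implementation;
# all names here are illustrative.

def train_and_score(learning_rate, data):
    """Fit y = w*x by gradient descent; return mean squared error."""
    w = 0.0
    for _ in range(100):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * grad
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def tune(candidates, data):
    """Grid search: evaluate every candidate, return the best one."""
    scores = {lr: train_and_score(lr, data) for lr in candidates}
    return min(scores, key=scores.get)

data = [(x, 3.0 * x) for x in range(1, 6)]  # ideal weight is 3
best = tune([0.0001, 0.001, 0.01], data)
print(best)  # → 0.01
```

Real automated ML systems replace the brute-force grid with smarter search strategies (such as Bayesian optimization or, in Microsoft’s framing, a recommender over pipelines), but the contract is the same: candidates in, best-scoring configuration out.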

Other new features include:

  • Distributed deep learning, to allow models to be automatically trained on a cluster of machines without having to configure the cluster.
  • Hardware-accelerated inferencing, which uses FPGAs to speed the serving of inferences from models.
  • Model management via CI/CD, so that Docker containers can be used to manage trained models.

Serdar Yegulalp is a senior writer at InfoWorld. A veteran technology journalist, Serdar has been writing about computers, operating systems, databases, programming, and other information technology topics for 30 years. Before joining InfoWorld in 2013, Serdar wrote for Windows Magazine, InformationWeek, Byte, and a slew of other publications. At InfoWorld, Serdar has covered software development, devops, containerization, machine learning, and artificial intelligence, winning several B2B journalism awards including a 2024 Neal Award and a 2025 Azbee Award for best instructional content and best how-to article, respectively. He currently focuses on software development tools and technologies and major programming languages including Python, Rust, Go, Zig, and Wasm. Tune into his weekly Dev with Serdar videos for programming tips and techniques and close looks at programming libraries and tools.
