Run:AI launches ResearcherUI, announces support for Kubeflow, Apache Airflow, and MLflow

By IEEE Computer Society Team on
September 2, 2021

Run:AI, a leading compute management platform for the orchestration and acceleration of AI, announced the launch of a new ResearcherUI, as well as integration with machine learning tools including Kubeflow, MLflow, and Apache Airflow. The new UI option is part of Run:AI's "Run:it your way" initiative, which lets data scientists choose their preferred ML tools for managing modeling and other data science processes on top of Run:AI's compute orchestration platform.

"Some data scientists like Kubeflow; some prefer MLflow; some would rather use YAML files. We even heard of a Fortune 500 company that uses 50 different data science tools. With Run:AI, there's no need to force all data science teams to use a specific ML tool in order to take advantage of the Run:AI GPU orchestration platform," said Omri Geller, CEO of Run:AI. "Instead, each team can 'Run:it their way,' sharing pooled, dynamic GPU resources while using the best ML tools to match the company's data science workflow."




There are dozens of data science tools used to run experiments, and naturally some data scientists are more comfortable with one tool than another. Run:AI dynamically allocates GPUs to data science jobs across an entire organization, regardless of the ML tools used to build and manage models. Teams can have guaranteed quotas, yet their workloads can also consume any idle GPU resources: creating logical fractions of GPUs, stretching jobs across multiple GPUs and multiple GPU nodes for distributed training, and maximizing hardware value for money.
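Because the platform is built on Kubernetes (see the company description below), a fractional-GPU job in this model can be pictured as an ordinary pod spec carrying scheduling metadata. The sketch below is purely illustrative: the annotation name `gpu-fraction`, the scheduler name `runai-scheduler`, and the image are assumptions for the sake of the example, not a confirmed Run:AI API.

```yaml
# Hypothetical example only. The annotation key "gpu-fraction",
# the scheduler name "runai-scheduler", and the image name are
# assumed for illustration; consult the vendor docs for real names.
apiVersion: v1
kind: Pod
metadata:
  name: train-job
  annotations:
    gpu-fraction: "0.5"        # request a logical half-GPU (assumed key)
spec:
  schedulerName: runai-scheduler  # hand the pod to the platform scheduler (assumed name)
  containers:
    - name: trainer
      image: registry.example.com/trainer:latest  # placeholder image
      command: ["python", "train.py"]
```

The point of the sketch is the division of labor: the data scientist submits a normal Kubernetes workload (from whatever ML tool they prefer), while the custom scheduler decides which physical GPU, or fraction of one, actually backs it.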

With "Run:it your way", Run:AI supports all popular machine learning platforms and interfaces, including but not limited to Kubeflow, Apache Airflow, MLflow, direct API access (including for air-gapped data science environments), YAML files, the command line, and Run:AI's new ResearcherUI.

About Run:AI

Run:AI is a cloud-native compute management platform for the AI era. Run:AI gives data scientists access to all of the pooled compute power they need to accelerate AI development and deployment – whether on-premises or in the cloud. The platform provides IT and MLOps with real-time visibility and control over scheduling and dynamic provisioning of GPUs to deliver more than 2X gains in utilization of existing infrastructure. Built on Kubernetes, Run:AI enables seamless integration with existing IT and data science workflows. Learn more at www.run.ai.
