Scaling Python Data Science with Dask

With the growing significance of big data for data science and machine learning, scaling data work is more important than ever. Dask is an open source library for parallel computing in Python. It provides a complete framework for distributed computing and makes it easy for data professionals and DevOps engineers to scale their workflows quickly. Dask is used in a wide range of domains, from finance and retail to academia and life sciences. It is also leveraged internally by numerous special-purpose tools such as XGBoost, RAPIDS, PyTorch, Prefect, Airflow, and more.

In this session, you will:
• Learn about Dask, what it can and can’t do, how it works and who uses it
• See how Dask augments traditional database query engines with more advanced machine learning capabilities, and how these technologies can be leveraged to work with data lakes
• See real-world examples, including data science pipelines at Capital One and ML workflows at Walmart
• Understand both the power and simplicity of using Dask for your own projects

However, deploying distributed systems in the cloud is hard. We’ll finish by discussing the design behind Coiled, a Dask cloud service designed to provide scalable Python with minimal fuss.
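To make the discussion concrete, here is a minimal, illustrative sketch of the kind of pandas-style workflow Dask scales out; the local Client, the parquet path, and the column names are placeholder assumptions rather than code from the session.

import dask.dataframe as dd
from dask.distributed import Client

# Start a local cluster; the same code can later point at a remote
# cluster (for example one provisioned through Coiled) to scale out.
client = Client()

# Lazily read a directory of parquet files as a pandas-like DataFrame.
# The path and column names below are illustrative placeholders.
df = dd.read_parquet("s3://example-bucket/sales/*.parquet")

# Familiar pandas syntax, executed in parallel across partitions.
result = df.groupby("store")["sales"].mean().compute()
print(result)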

Topics Covered

Data Lake Engines