Stop Thinking, Just Do!

Sungsoo Kim's Blog

Introduction to Data Mesh


8 January 2023


Article Source


Introduction to Data Mesh

Abstract

Zhamak Dehghani works with ThoughtWorks as the director of emerging technologies in North America, with a focus on distributed systems and data architecture, and a deep passion for decentralized technology solutions.

Zhamak serves on multiple tech advisory boards including ThoughtWorks. She has worked as a technologist for over 20 years and has contributed to multiple patents in distributed computing communications, as well as embedded device technologies.

She founded the concept of Data Mesh in 2018 and has since been implementing the concept and evangelizing it across the wider industry. She is a co-author of Software Architecture: The Hard Parts and the author of Data Mesh, both published by O’Reilly.

About Data Mesh

For over half a century, organizations have assumed that data is an asset to collect more of, and that data must be centralized to be useful. These assumptions have led to centralized, monolithic architectures, such as data warehouses and data lakes, that limit organizations’ ability to innovate with data at scale.

Data Mesh is an alternative architecture and organizational structure for managing analytical data. Its objective is to enable access to high-quality data for analytical and machine learning use cases, at scale.

It’s an approach that shifts the data culture, technology, and architecture:

  • from centralized collection and ownership of data to domain-oriented connection and ownership of data
  • from data as an asset to data as a product
  • from proprietary big platforms to an ecosystem of self-serve data infrastructure with open protocols
  • from top-down, manual data governance to a federated, computational one (see the sketch after this list).
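
To make these shifts a little more concrete, the sketch below shows one way a domain team might describe a data product it owns, and how a global policy could be checked as code rather than enforced manually. This is a minimal illustration in Python; the names, fields, and policies are assumptions for the example, not part of any specific Data Mesh implementation or of the talk.

```python
# Hypothetical sketch of a domain-owned data product descriptor.
# All names and policies below are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class OutputPort:
    """A self-serve access point exposed over an open protocol."""
    name: str
    protocol: str      # e.g. "parquet+s3", "sql", "kafka"
    address: str


@dataclass
class DataProduct:
    """Analytical data owned and published by a single domain team."""
    domain: str                    # owning domain, not a central data team
    name: str
    owner: str                     # accountable product owner
    description: str
    output_ports: list[OutputPort] = field(default_factory=list)
    freshness_slo_hours: int = 24  # product-level service objective
    pii: bool = False


def governance_violations(product: DataProduct) -> list[str]:
    """Federated *computational* governance: global policies enforced as code."""
    violations = []
    if not product.owner:
        violations.append("every data product must declare an accountable owner")
    if product.pii and not all(p.protocol.endswith("+tls") for p in product.output_ports):
        violations.append("products containing PII must expose only encrypted ports")
    return violations


if __name__ == "__main__":
    orders = DataProduct(
        domain="order-management",
        name="daily-order-facts",
        owner="orders-data-product-team",
        description="Cleaned, de-duplicated order events aggregated per day.",
        output_ports=[OutputPort("batch", "parquet+s3", "s3://orders/daily-facts/")],
    )
    print(governance_violations(orders) or "policy checks passed")
```

The point of the sketch is the shift in responsibility: the descriptor is written and published by the owning domain (domain-oriented ownership, data as a product, open-protocol output ports), while the policy check runs the same way for every product in the mesh (federated computational governance).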

About Deep Data Research Computing Center

We bring in computer scientists’ expertise to help biologists and biometricians scale their research. Over the years, the falling cost of sequencing has led to an overwhelming amount of biomedical data being generated. While this helps biologists and biometricians unlock deeper insights, they often lack the time or knowledge to acquire, store, and analyze that data in a scalable way.

Our multidisciplinary team builds cost-efficient systems for research studies with large numbers of participants and diverse data types, with the aim of collecting, storing, and analyzing data across multiple platforms while preserving fast turnaround times. By using robust de-identification and end-to-end encryption methods, we reduce the risk of violating privacy and security regulations.
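
As a rough illustration of the de-identification step, the sketch below replaces direct participant identifiers with keyed pseudonyms before records are stored or shared. The field names, key handling, and workflow are hypothetical assumptions for the example, not a description of the center’s actual pipeline.

```python
# Minimal de-identification sketch: derive stable, non-reversible pseudonyms
# for participant IDs and strip direct identifiers from each record.
# Field names and key management here are illustrative assumptions.
import hmac
import hashlib


def pseudonymize(participant_id: str, secret_key: bytes) -> str:
    """Derive a stable, non-reversible pseudonym for a participant ID."""
    digest = hmac.new(secret_key, participant_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()


def de_identify(record: dict, secret_key: bytes) -> dict:
    """Drop direct identifiers and keep only the pseudonym plus study data."""
    direct_identifiers = {"participant_id", "name", "email"}
    cleaned = {k: v for k, v in record.items() if k not in direct_identifiers}
    cleaned["pseudonym"] = pseudonymize(record["participant_id"], secret_key)
    return cleaned


if __name__ == "__main__":
    key = b"held-only-by-the-study-coordinator"  # hypothetical key management
    raw = {"participant_id": "P-0042", "name": "Jane Doe", "heart_rate_bpm": 61}
    print(de_identify(raw, key))
```

Because the same key yields the same pseudonym, records for one participant can still be linked across platforms without the raw identifier ever leaving the collection system.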

Our vision is that the ability to collect and utilize more data with ease accelerates the transformation from conventional medicine to deep medicine. In conventional medicine, we are considered sick when our health status deviates from the population average. Deep medicine reveals the wide differences between individuals by investigating a variety of domains, and unlocks personalized insights to prevent disease and manage daily health.

Connect with us:

Website: https://deepdata.stanford.edu/

