How I Stopped Worrying & Learned to Love Data – Hortonworks (HDP)

How I stopped worrying and started to love the data – Meeting seasonal data peaks

Hortonworks (HDP) and Robin Systems Webinar

Deploying, right-sizing, meeting seasonal peaks without disrupting availability, and supporting multiple clusters on a shared platform are often seen as the most difficult challenges in a Hadoop deployment.

In this webinar, we discuss the operational complexities often associated with Hadoop deployments and how they adversely impact the business. We then look at the Robin Hyper-Converged Kubernetes Platform and see how it can help address these challenges.

Eric Thorsen is VP, Industry Solutions, at Hortonworks, with a specialty in Retail and Consumer Products.

Eric has over 25 years of technology experience. Prior to joining Hortonworks, he was a VP with SAP, managing strategic customers in the Retail and CP industries. Focusing on business value and the impact of technology on business imperatives, Eric has counseled grocers, e-commerce companies, durables and hardline manufacturers, as well as fashion and specialty retailers.

Eric’s focus on open source big data provides strategic direction for revenue and margin gain, greater consumer loyalty, and cost-takeout opportunities.

Deba Chatterjee, Director of Products at Robin Systems, has worked on data-intensive applications for more than 15 years. In his previous position as Senior Principal Product Manager for Oracle Multitenant, Deba delivered mission-critical solutions for one of the largest enterprise databases.

At Robin Systems, Deba has applied that experience to building the Robin Hyper-Converged Kubernetes Platform, which delivers bare-metal performance and application-level Quality of Service across all applications, helping companies meet peak workloads while maximizing off-peak utilization.

Meeting seasonal data peaks

Today, organizations struggle to cobble together different open source software to manage Big Data environments such as Hadoop, or to build an effective data pipeline that can keep up with both the volume and the speed of data ingestion and analysis.

The applications used within a Big Data pipeline differ from case to case and almost always present multiple challenges. Organizations are looking to do the following:

  • Harness data from multiple sources to ingest, clean, & transform Big Data
  • Achieve agile provisioning of applications & clusters
  • Scale elastically for seasonal spikes and growth (see the sketch after this list)
  • Simplify Application & Big Data lifecycle management
  • Manage all processes with lower OPEX costs
  • Share data among Dev, Test, Prod environments easily
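
The elastic-scaling item above maps naturally onto standard Kubernetes primitives. The snippet below is a minimal sketch, using plain Kubernetes autoscaling rather than Robin-specific tooling, of how a worker Deployment could grow for a seasonal peak and shrink again afterwards; the deployment name ("spark-worker"), replica bounds, and CPU target are illustrative assumptions.

    # Minimal sketch: generate a Kubernetes HorizontalPodAutoscaler manifest that
    # grows a worker Deployment during seasonal peaks and shrinks it off-peak.
    # The deployment name, replica bounds, and CPU threshold are assumptions,
    # not values taken from the Robin platform.
    import yaml

    def seasonal_hpa(deployment: str, min_replicas: int, max_replicas: int,
                     cpu_target_pct: int) -> dict:
        """Build an autoscaling/v2 HorizontalPodAutoscaler for a worker Deployment."""
        return {
            "apiVersion": "autoscaling/v2",
            "kind": "HorizontalPodAutoscaler",
            "metadata": {"name": f"{deployment}-hpa"},
            "spec": {
                "scaleTargetRef": {
                    "apiVersion": "apps/v1",
                    "kind": "Deployment",
                    "name": deployment,
                },
                "minReplicas": min_replicas,   # steady-state, off-peak size
                "maxReplicas": max_replicas,   # ceiling for the seasonal peak
                "metrics": [{
                    "type": "Resource",
                    "resource": {
                        "name": "cpu",
                        "target": {"type": "Utilization",
                                   "averageUtilization": cpu_target_pct},
                    },
                }],
            },
        }

    if __name__ == "__main__":
        # Apply the printed manifest with: kubectl apply -f <file>
        print(yaml.safe_dump(seasonal_hpa("spark-worker", 3, 20, 70), sort_keys=False))

Autoscaling of this kind covers stateless compute; stateful services such as HDFS also need storage-aware placement and data management, which is the gap a shared platform aims to close.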

Robin Solution – Simple Big Data Application & Pipeline Management

The Robin Hyper-Converged Kubernetes Platform provides a complete out-of-the-box solution for hosting Big Data environments such as Hadoop on a shared platform built from your existing hardware, whether proprietary, commodity, or cloud-based.

The container-based Robin Hyper-Converged Kubernetes Platform helps you manage Big Data and rapidly build an elastic, agile, high-performance Big Data pipeline.

  • Deploy on bare metal or on virtual machines
  • Rapidly deploy multiple instances of data-driven applications
  • No need to make additional copies of data (see the sketch below)
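
The "no additional copies" item above refers to thin-cloning data volumes rather than physically duplicating them. As an illustrative sketch using the standard Kubernetes CSI volume-cloning API, not Robin's specific mechanism, the snippet below emits a PersistentVolumeClaim that clones an existing production volume for a dev or test environment; the PVC names, storage class, and size are assumptions.

    # Sketch: clone a production data volume for dev/test via Kubernetes CSI
    # volume cloning, so the new environment gets its own writable view of the
    # data without a full physical copy. The PVC names, namespace, storage
    # class, and size are illustrative assumptions.
    import yaml

    def clone_pvc(source_pvc: str, clone_name: str, size: str,
                  storage_class: str, namespace: str = "default") -> dict:
        """Build a PersistentVolumeClaim whose dataSource is an existing PVC."""
        return {
            "apiVersion": "v1",
            "kind": "PersistentVolumeClaim",
            "metadata": {"name": clone_name, "namespace": namespace},
            "spec": {
                # The storage class must point to a CSI driver that supports cloning.
                "storageClassName": storage_class,
                # Declaring an existing PVC as the dataSource requests a clone.
                "dataSource": {"kind": "PersistentVolumeClaim", "name": source_pvc},
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": size}},
            },
        }

    if __name__ == "__main__":
        # Apply with: kubectl apply -f <file>; the clone must be at least as
        # large as the source volume.
        print(yaml.safe_dump(
            clone_pvc("hdfs-prod-data", "hdfs-test-data", "500Gi", "csi-block"),
            sort_keys=False))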

Robin for Big Data

Video: Robin Systems on Vimeo
