Featured Technologies
Check out the technologies we support to help your team streamline moving data from your data center to AWS, between AWS services, and even between AWS and other cloud platforms.
We've migrated more databases to Amazon RDS, Amazon Aurora, and Amazon Redshift using the AWS Database Migration Service (DMS) and Schema Conversion Tool (SCT) than any other AWS partner in the world.
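A DMS migration can be driven entirely from code. Below is a minimal sketch using boto3, assuming the replication instance and the source and target endpoints already exist; all ARNs and names are placeholders.

```python
# A minimal DMS sketch with boto3; ARNs below are placeholders.
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Migrate every table in the "dbo" schema; table mappings use the
# standard DMS selection-rule JSON format.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-dbo",
        "object-locator": {"schema-name": "dbo", "table-name": "%"},
        "rule-action": "include",
    }]
}

response = dms.create_replication_task(
    ReplicationTaskIdentifier="example-full-load",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load",  # or "full-load-and-cdc" for ongoing replication
    TableMappings=json.dumps(table_mappings),
)

dms.start_replication_task(
    ReplicationTaskArn=response["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```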
See how we can orchestrate the transformation of your data, leveraging the simple, flexible, and cost-effective AWS Glue ETL service.
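As an illustration, here is a minimal Glue job sketch in PySpark; the catalog database, table, and S3 path are hypothetical. It reads a crawled table, renames two fields, and writes Parquet to S3.

```python
# A minimal AWS Glue ETL job sketch (PySpark); names and paths are placeholders.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (table crawled beforehand).
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)

# Keep and rename selected fields.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[("order_id", "long", "id", "long"),
              ("order_total", "double", "total", "double")],
)

# Write the curated result back to S3 as Parquet.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```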
By configuring the AWS Data Pipeline web service, you can process data locked away in on-premises storage and then efficiently transfer the results to different AWS services.
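Below is a hedged sketch of defining and activating a pipeline with boto3; the activity, worker group, and command are illustrative placeholders, and a production definition would carry more fields.

```python
# A minimal Data Pipeline sketch with boto3; values are placeholders.
import boto3

dp = boto3.client("datapipeline", region_name="us-east-1")

pipeline = dp.create_pipeline(name="example-pipeline", uniqueId="example-001")
pipeline_id = pipeline["pipelineId"]

dp.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[
        {
            "id": "Default",
            "name": "Default",
            "fields": [
                {"key": "scheduleType", "stringValue": "ondemand"},
                {"key": "role", "stringValue": "DataPipelineDefaultRole"},
                {"key": "resourceRole",
                 "stringValue": "DataPipelineDefaultResourceRole"},
            ],
        },
        {
            # Shell activity that pushes on-premises files to S3 via a
            # task runner registered in the "on-prem-workers" group.
            "id": "CopyToS3",
            "name": "CopyToS3",
            "fields": [
                {"key": "type", "stringValue": "ShellCommandActivity"},
                {"key": "workerGroup", "stringValue": "on-prem-workers"},
                {"key": "command",
                 "stringValue": "aws s3 cp /data s3://example-bucket/ --recursive"},
            ],
        },
    ],
)

dp.activate_pipeline(pipelineId=pipeline_id)
```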
Learn how DB Best can help you analyze data in Amazon S3 by getting the most out of the interactive, serverless Amazon Athena query service.
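The sketch below, with a hypothetical database, table, and output bucket, shows the typical Athena pattern with boto3: submit a query, poll for completion, then read the result rows.

```python
# A minimal Athena query sketch; database, table, and bucket are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```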
Learn how DB Best can help you collect, process, and analyze real-time streaming data so you can get timely insights and react quickly to new information with Amazon Kinesis.
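Here is a minimal producer and consumer sketch against a hypothetical stream; records that share a partition key land on the same shard, which preserves their order.

```python
# A minimal Kinesis producer/consumer sketch; stream name is a placeholder.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Producer: write one event.
kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps({"user": "u-42", "action": "page_view"}).encode("utf-8"),
    PartitionKey="u-42",
)

# Consumer: read from the start of the first shard.
shard = kinesis.describe_stream(StreamName="clickstream")[
    "StreamDescription"]["Shards"][0]
iterator = kinesis.get_shard_iterator(
    StreamName="clickstream",
    ShardId=shard["ShardId"],
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

for record in kinesis.get_records(ShardIterator=iterator, Limit=10)["Records"]:
    print(json.loads(record["Data"]))
```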
DB Best can guide your business in running and scaling Apache Hadoop, Spark, HBase, Presto, Hive, and other big data frameworks in the Amazon cloud by leveraging the Amazon EMR service.
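For example, a transient EMR cluster can be launched from code to run a single Spark job and then terminate; in this boto3 sketch the instance types, release label, and script location are placeholders.

```python
# A hedged sketch of launching a transient EMR cluster for one Spark step.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

emr.run_job_flow(
    Name="example-spark-cluster",
    ReleaseLabel="emr-5.30.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate after the steps finish
    },
    Steps=[{
        "Name": "spark-job",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/jobs/etl.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```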
Additional tools we've used with AWS
Data integration platforms now support moving data between your data center and AWS. Our team has worked with the following AWS data integration solutions and has a deep understanding of how to build hybrid solutions that optimize performance.
Open source data integration solutions for AWS
For organizations looking to use open-source data integration solutions, our team supports the following Apache projects for data integration and storage. We can also migrate solutions built on these Apache technologies to AWS data integration tools like AWS Glue to take advantage of serverless computing, performance, security, and integration with other AWS services.
Data Stores
- Apache Hadoop is a distributed computing platform that includes the Hadoop Distributed File System (HDFS) and an implementation of MapReduce. Available on AWS as Amazon EMR (Elastic MapReduce).
- Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable. On AWS, HBase runs as a supported application on Amazon EMR, and Amazon DynamoDB supplies similar NoSQL capabilities.
- Apache Hive is data warehouse software that facilitates querying and managing large datasets residing in distributed storage, with tools for easy data extract/transform/load (ETL) into HDFS and other data stores like HBase. Hive runs on Amazon EMR, and Amazon Athena offers a similar SQL query capability over data in S3.
- Apache CouchDB is a database that completely embraces the web by storing your data as JSON documents. Amazon DynamoDB supplies similar document storage capabilities on AWS.
- Apache Spark is a fast, general engine for large-scale data processing. It offers high-level APIs in Java, Scala, and Python as well as a rich set of libraries, including stream processing, machine learning, and graph analytics. Spark runs on AWS via Amazon EMR (see the PySpark sketch after this list).
- The Apache Cassandra database provides high availability, linear scalability, and fault tolerance on commodity hardware or cloud infrastructure.
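As promised above, here is a small PySpark sketch of the kind of job that runs on Amazon EMR; the input path and column names are hypothetical. It aggregates order totals per customer from CSV data.

```python
# A minimal PySpark aggregation sketch; the S3 path is a placeholder.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-rollup").getOrCreate()

orders = spark.read.csv("s3://example-bucket/raw/orders/",
                        header=True, inferSchema=True)

# Sum each customer's order totals and rank by lifetime value.
totals = (orders
          .groupBy("customer_id")
          .agg(F.sum("order_total").alias("lifetime_value"))
          .orderBy(F.desc("lifetime_value")))

totals.show(10)
spark.stop()
```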
Complex Event Processing
- Apache Storm is a distributed real-time computation system. Just as Hadoop provides a set of general primitives for batch processing, Storm provides a set of general primitives for real-time computation. Amazon Kinesis supplies similar capabilities on AWS.
- Apache Beam is a unified programming model for both batch and streaming data processing, enabling efficient execution across diverse distributed execution engines and providing extensibility points for connecting to different technologies and user communities. AWS Glue covers similar managed ETL scenarios (a minimal Beam pipeline sketch follows this list).
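Here is the minimal Apache Beam pipeline referenced above, written with the Python SDK; it uses the local DirectRunner, but the same code can target other runners.

```python
# A minimal Beam pipeline sketch: filter error lines and count them.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (pipeline
     | "Create" >> beam.Create(["error: disk full", "ok", "error: timeout"])
     | "KeepErrors" >> beam.Filter(lambda line: line.startswith("error"))
     | "Count" >> beam.combiners.Count.Globally()
     | "Print" >> beam.Map(print))
```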
General Data Processing
- Apache Sqoop is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases. AWS Glue provides this capability.
- Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating and moving large amounts of log data from many different sources to a centralized data store.
- Apache Kafka is a distributed, fault-tolerant publish-subscribe messaging system that can handle hundreds of megabytes of reads and writes per second from thousands of clients (see the producer and consumer sketch below).
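The promised Kafka sketch, using the kafka-python package; the broker address and topic are placeholders.

```python
# A minimal Kafka producer/consumer sketch with kafka-python.
import json
from kafka import KafkaProducer, KafkaConsumer

# Producer: JSON-encode values before publishing.
producer = KafkaProducer(
    bootstrap_servers="broker1:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("app-metrics", {"host": "web-01", "cpu": 0.42})
producer.flush()

# Consumer: start from the earliest offset and decode JSON values.
consumer = KafkaConsumer(
    "app-metrics",
    bootstrap_servers="broker1:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)
    break  # read a single message for this demo
```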
Using AWS Snowball Edge to transfer multiple terabytes of data into the Amazon cloud
One of the main concerns during large-scale database migrations to the cloud is how long the data transfer will take. When you need to move multiple terabytes of data, the migration process can last for weeks or even months. In addition, the bandwidth of your network connection becomes a limiting factor, and security concerns may arise.
As a result, the whole migration project becomes unsustainable, causing many customers with heavyweight databases to abandon their cloud migration initiatives. Amazon came up with a physical solution called AWS Snowball Edge, which allows for fast, secure transfer of up to 83 TB of data in a matter of days.
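Once the device is on site and unlocked, data lands on it through an S3-compatible interface, so standard S3 tooling works. Below is a hedged boto3 sketch; the local endpoint URL and device credentials (obtained from the snowballEdge client) are placeholders.

```python
# A hedged sketch of copying a file onto a Snowball Edge through its
# S3-compatible interface; endpoint and credentials are placeholders
# taken from unlocking the device with the snowballEdge client.
import boto3

snowball_s3 = boto3.client(
    "s3",
    endpoint_url="https://192.0.2.10:8443",  # device IP on the local network
    aws_access_key_id="DEVICE_ACCESS_KEY",
    aws_secret_access_key="DEVICE_SECRET_KEY",
)

# Copy a local export file into the bucket associated with the Snowball
# job; AWS imports it into S3 once the device is shipped back.
snowball_s3.upload_file(
    Filename="/exports/warehouse_dump.csv.gz",
    Bucket="example-import-bucket",
    Key="warehouse/warehouse_dump.csv.gz",
)
```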
We had a great opportunity to test the latest AWS Snowball Edge device in our data center. At half the size of the original AWS Snowball, the latest version of the appliance can store up to 83 TB of data, which speeds up large-scale data transfers even when you account for the device shipping time.
Managing data and applications anywhere, we often face the challenge of migrating huge amounts of data for our customers. So, as an Amazon partner, we received the brand-new AWS Snowball Edge for testing purposes and used it to migrate our Oracle database to Amazon Aurora PostgreSQL. Watch the following video to learn more about our experience with AWS Snowball Edge.
Learn more
Blog posts
Check out the following blog posts to learn about some of the solutions we’ve built using the AWS data integration technologies.
One of the main concerns during large-scale database migrations to the cloud is how long the data transfer will last. When you need to move multiple terabytes of data, the migration pr...
Sharing our technical experience with our readers, we've reached an impressive audience on our blog. So, it was easy to distinguish the most popular AWS Database Migration...
This post continues our video blog series on the AWS Schema Conversion Tool (SCT). In our previous blog posts, we talked about using AWS SCT for transactional database migration projec...