
Simpler, safer and cost-effective migration to Elastic Cloud


Are you looking for a simpler, safer and more cost-effective way to migrate from a self-managed deployment to Elastic Cloud?


The documentation and blog posts on the subject can be a little worrying: one of the statements warns of time lag and loss of data. It's a cut-over, and the data in transit during that switch could be lost.

Here is a better, easier and safer way, one that will also make the move a lot less costly. You can still take the snapshot approach alongside it, and this method actually helps you mitigate the data loss issue.
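If you do take snapshots as a belt-and-braces step, the standard Elasticsearch snapshot API is all you need. Here is a minimal sketch in Python (the repository name, bucket and credentials are placeholders, and the cluster needs the repository-s3 plugin with its S3 client configured in elasticsearch.yml):

    import requests

    ES = "http://localhost:9200"     # placeholder: self-managed cluster
    AUTH = ("elastic", "changeme")   # placeholder credentials

    # Register an S3 repository for snapshots (requires the repository-s3 plugin).
    requests.put(
        f"{ES}/_snapshot/local_s3_repo",
        json={"type": "s3", "settings": {"bucket": "es-snapshots", "base_path": "migration"}},
        auth=AUTH,
    ).raise_for_status()

    # Take a snapshot of everything; poll _snapshot/local_s3_repo/_all to track progress.
    requests.put(
        f"{ES}/_snapshot/local_s3_repo/pre-migration-1",
        params={"wait_for_completion": "false"},
        auth=AUTH,
    ).raise_for_status()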


If we start with the current self-managed deployment, all data ingest is happening on-premises and the target is to move without losing a single line of data. There are hundreds of data types and ingest flavours, so the diagram is for guidance only.


1. Introduce a pipeline. Simply update your Beats to send data via the pipeline: update the config and restart. It's easy, with zero risk of losing data, none. The pipeline forwards data to Elastic in pass-through mode and saves a copy of your data in local S3 storage before forwarding it to its original destination. It sounds complex and time consuming, and you might expect it to add delay, but it adds no more than 2 seconds from left to right.
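To show just how small that change is, here is a hedged sketch of the edit in filebeat.yml, assuming the pipeline exposes a Beats/Logstash-compatible listener (the hostname and port are placeholders for your Logiq.ai pipeline endpoint):

    # filebeat.yml -- before: Beats ship straight to the self-managed cluster
    # output.elasticsearch:
    #   hosts: ["https://es.internal:9200"]

    # after: Beats ship to the pipeline, which stores a copy in local S3
    # and forwards to the original Elasticsearch destination unchanged
    output.logstash:
      hosts: ["pipeline.internal:5044"]   # placeholder pipeline endpoint

Then restart the Beat (for example, systemctl restart filebeat) and the pass-through begins.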



2. Add a Logiq.ai cluster destination in the cloud. Whilst all of your data is being forwarded to the original self-managed Elastic and the local Logiq.ai cluster hosting the pipeline, the cloud cluster will help us move your data and test a few things in advance.
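One of those advance tests is simply confirming that the Elastic Cloud deployment is reachable and healthy before any data moves. A minimal sketch, with a placeholder endpoint and credentials:

    import requests

    CLOUD = "https://my-deployment.es.us-east-1.aws.found.io"  # placeholder Elastic Cloud endpoint
    AUTH = ("elastic", "cloud-password")                        # placeholder credentials

    # Basic reachability and health check before any data is pushed.
    health = requests.get(f"{CLOUD}/_cluster/health", auth=AUTH, timeout=10)
    health.raise_for_status()
    print(health.json()["status"])  # expect "green" (small deployments may report "yellow")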


3. Pull in historic data from the self-managed environment. To complete the dataset in the local object storage, Logiq.ai needs to pull in historic data from the self-managed environment. This can be done quickly, and once complete, we have a mirror copy of your data that is kept up to date in real time.
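To make the backfill concrete, here is an illustrative sketch of what pulling historic data into local object storage can look like: scroll an index and write NDJSON batches to S3. In practice Logiq.ai does this for you; the index name, bucket and endpoints below are placeholders (elasticsearch-py 8.x and boto3 assumed):

    import json
    import boto3
    from elasticsearch import Elasticsearch, helpers

    es = Elasticsearch("http://es.internal:9200", basic_auth=("elastic", "changeme"))  # placeholders
    s3 = boto3.client("s3", endpoint_url="http://s3.local:9000")  # local S3-compatible storage

    batch, part = [], 0
    # Stream every document out of the historic index without holding it all in memory.
    for doc in helpers.scan(es, index="logs-2023", query={"query": {"match_all": {}}}):
        batch.append(json.dumps(doc["_source"]))
        if len(batch) == 5000:  # flush in 5,000-document NDJSON parts
            s3.put_object(Bucket="es-archive", Key=f"logs-2023/part-{part}.ndjson",
                          Body="\n".join(batch).encode())
            batch, part = [], part + 1
    if batch:  # flush the final partial batch
        s3.put_object(Bucket="es-archive", Key=f"logs-2023/part-{part}.ndjson",
                      Body="\n".join(batch).encode())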


4. Push sample data to the cloud. Data migration is a fundamental and critical process. We are now entering a phase where we can start to validate data and migrate any existing configuration. Kibana dashboards, alert integrations and user access are all important, and they need testing with your real data. Remember that you don't want to overwhelm Elastic Cloud or your network with huge data transfers just yet. This is a gradual process, and a full day or week of data will suffice, depending on volume and testing progress of course.
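As one illustration of a bounded push, Elasticsearch's own reindex-from-remote can copy a single day of data into the cloud cluster; the pipeline does the equivalent for you, so this is only a sketch. Hosts, credentials and index names are placeholders, the cloud cluster must be able to reach the self-managed one over the network, and the remote host must appear in the cloud cluster's reindex.remote.whitelist setting:

    import requests

    CLOUD = "https://my-deployment.es.us-east-1.aws.found.io"  # placeholder
    AUTH = ("elastic", "cloud-password")                        # placeholder

    # Copy one day of data from the self-managed cluster into the cloud cluster.
    # Long-running copies can use ?wait_for_completion=false and the tasks API instead.
    resp = requests.post(f"{CLOUD}/_reindex", auth=AUTH, json={
        "source": {
            "remote": {"host": "https://es.internal:9200",   # must be whitelisted
                       "username": "elastic", "password": "changeme"},
            "index": "logs-2024",
            "query": {"range": {"@timestamp": {"gte": "now-1d/d", "lt": "now/d"}}},
        },
        "dest": {"index": "logs-sample"},
    })
    resp.raise_for_status()
    print(resp.json()["total"], "documents copied")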




5. Regulation and filtering. This is the part where we start to look at your data in the existing self-managed environment. We want to separate signal from noise, prioritise what is important, and identify what has been collected and stored just because you had to. We have blogged on this subject before, where we can help you save up to 90% on ingest costs. If we can do the same here, it means the data we eventually push in real time will be drastically reduced. Your dashboards, alerting, reporting and automation will all work as expected, and it will cost you a lot less. We think your network administrators will be happier too.


This task is the most time consuming: each index needs to be validated, and we create a set of queries that select the important data to push forward. Note: we take care of this for you. All the queries run from Logiq.ai and not your self-managed environment, so there is no overhead on your live system. A hedged example of what such a filter can look like is sketched below.
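As a purely illustrative example, a per-index filter often boils down to a Query DSL expression like the one below, which keeps warnings and above and drops a known-noisy source. The field names, levels and index are placeholders; the real rules come out of the validation work described above, and counting first tells you how big the saving will be:

    import requests

    ES = "http://es.internal:9200"   # placeholder self-managed cluster
    AUTH = ("elastic", "changeme")   # placeholder credentials

    # "Signal" definition for one index: keep WARN and above, drop health checks.
    signal_query = {
        "bool": {
            "filter": [{"terms": {"log.level": ["warn", "error", "critical"]}}],
            "must_not": [{"term": {"logger.name": "healthcheck"}}],
        }
    }

    # Count what the filter keeps versus the whole index, to estimate the reduction.
    kept = requests.post(f"{ES}/logs-2024/_count", auth=AUTH,
                         json={"query": signal_query}).json()["count"]
    total = requests.post(f"{ES}/logs-2024/_count", auth=AUTH,
                          json={"query": {"match_all": {}}}).json()["count"]
    print(f"filter keeps {kept}/{total} documents ({kept / total:.1%})")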


It's a gradual process: as we streamline and filter your data, we start to create forwarding rules that populate the Elastic instance in the cloud. We repeat this cycle until all critical data is feeding your cloud instance. Eventually we reach the point where both run in parallel, until you can migrate all users to the new cloud instance.


6. Decommission the self-managed instance. Now that your cloud instance has taken over, it's time to shut down and remove the existing self-managed instance. Your data sources continue to operate, and 100% of your data is stored in local S3 storage. The data being pushed to the cloud is efficient, regulated, enriched and a small fraction of the original load.


Introducing the pipeline reduces storage costs and gives you the control to decide what data is forwarded and what data stays behind. Everything is available when you need it: searchable, and forwardable if required.

This approach takes a lot of the risk and cost away from you. With this innovative solution, you can have a "keep it local" approach that pushes the metadata to the cloud but keeps the actual data at your location, reducing network costs and the future risk of data loss. You also keep the power of choice. If you ever want to evaluate another solution, or complement Elastic in some way, then having all the data and the power to forward it is priceless.


For more information, please email hello@visibilityplatforms.com and our team will be in contact shortly.





