
AWS : Boto3 (Create, Delete RDS using Python)

The code below deletes an existing RDS instance and creates a new one in AWS RDS using the boto3 Python package:

import boto3

# Creating a client session for RDS using region name, aws_access_key_id & aws_secret_access_key
client = boto3.client('rds', region_name='ap-south-1',
                      aws_secret_access_key='YOUR_AWS_SECRET_ACCESS_KEY',
                      aws_access_key_id='YOUR_AWS_ACCESS_KEY_ID')
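
Hardcoding access keys is fine for a quick demo, but boto3 can also resolve credentials from its default chain (environment variables, the ~/.aws/credentials file, or an attached IAM role), which is safer for anything beyond a test. A minimal sketch of the same client without keys in the source:

# Safer alternative: let boto3 pick up credentials from the default chain
# (environment variables, ~/.aws/credentials, or an IAM role)
client = boto3.client('rds', region_name='ap-south-1')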

# Deleting an existing instance
# The DB instance ID is enough; make sure to skip the final snapshot & delete any automated backups
response = client.delete_db_instance(
    DBInstanceIdentifier='newpoc',
    SkipFinalSnapshot=True,
    DeleteAutomatedBackups=True
)
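
Note that delete_db_instance only kicks off the deletion; the instance sits in a 'deleting' state for several minutes. If a later step needs the instance to be fully gone, a boto3 waiter can block until then. A minimal sketch (the Delay/MaxAttempts numbers are just example values):

# Block until the instance is actually deleted (polls RDS internally)
waiter = client.get_waiter('db_instance_deleted')
waiter.wait(
    DBInstanceIdentifier='newpoc',
    WaiterConfig={'Delay': 30, 'MaxAttempts': 60}  # poll every 30s, up to 30 minutes
)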

# To cross-check which RDS instances are still available
response = client.describe_db_instances()
print(response)
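
The raw response is a fairly large dictionary. If you only want to see which instances exist and their current state, iterating over the DBInstances list is enough, as in this small sketch:

# Print just the identifier and status of each instance
for db in response['DBInstances']:
    print(db['DBInstanceIdentifier'], '->', db['DBInstanceStatus'])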

# To create a new RDS instance in AWS
# DBInstanceIdentifier is the name of the RDS instance
# Engine is the RDBMS you want (mysql in this case)
# Provide the user name and password using the MasterUsername & MasterUserPassword properties
response = client.create_db_instance(
    DBName='mysqldbtestdb',
    DBInstanceIdentifier='newpoc1',
    AllocatedStorage=20,
    DBInstanceClass='db.t4g.micro',
    Engine='mysql',
    MasterUsername='admin',
    MasterUserPassword='Mypassword.1',
    PubliclyAccessible=True
)
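
Creation is also asynchronous, and you cannot connect until the instance reports 'available'. A waiter plus one more describe call fetches the endpoint to connect to; a minimal sketch (note that actually reaching the endpoint from outside the VPC also needs a security group that allows inbound traffic on the MySQL port):

# Wait until the new instance is ready, then fetch its endpoint
waiter = client.get_waiter('db_instance_available')
waiter.wait(DBInstanceIdentifier='newpoc1')

info = client.describe_db_instances(DBInstanceIdentifier='newpoc1')
endpoint = info['DBInstances'][0]['Endpoint']
print('Connect to:', endpoint['Address'], 'port', endpoint['Port'])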


GitHub location of the above Python code:

https://github.com/amathe1/boto3_project/blob/main/boto3_module/b3_RDS_create_delete.py


Have a great day!


Arun Mathe

Gmail ID : arunkumar.mathe@gmail.com

Contact No : +91 9704117111
