
AWS Kinesis - Stream, Firehose, Analytics Overview

AWS Kinesis:
  • AWS Kinesis is a managed alternative to Apache Kafka.
  • It can be used for real-time processing of big data streams.
  • It can be used for application logs, metrics, forecast data, and IoT.
  • It can feed stream-processing frameworks like Spark, NiFi, etc.
Kinesis Capabilities:
  • Kinesis Streams: Streaming data ingestion at scale with low latency. It is a data stream.
  • Kinesis Analytics: To perform analytics on real-time streaming data using SQL. You can filter or aggregate data in real time.
  • Kinesis Firehose: To load streams of data into S3, Redshift, Splunk, or Elasticsearch. It is a delivery stream.


Kinesis Data Streams:
  • Streams are divided into shards. To scale an application up, you can update the shard configuration by increasing the number of shards. By default a shard's data is retained for 1 day, but you can extend retention up to 7 days (see the sketch after this list).
  • Multiple applications can consume the same stream.
  • Real-time processing of data at high throughput.
  • A single record must not be larger than 1 MB.
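
As a hedged sketch (not part of the original post), creating a stream and extending its retention could look like this with the AWS SDK for Java v1; the stream name "user-events" and the shard count are hypothetical:

    import com.amazonaws.services.kinesis.AmazonKinesis;
    import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
    import com.amazonaws.services.kinesis.model.IncreaseStreamRetentionPeriodRequest;

    public class CreateStreamExample {
        public static void main(String[] args) {
            AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

            // Create a stream with 2 shards ("user-events" is a hypothetical name).
            kinesis.createStream("user-events", 2);

            // Default retention is 1 day (24 hours); extend it to the 7-day maximum.
            kinesis.increaseStreamRetentionPeriod(new IncreaseStreamRetentionPeriodRequest()
                    .withStreamName("user-events")
                    .withRetentionPeriodHours(168));
        }
    }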

Kinesis Stream Shards:

  • 1 stream can have many shards, but 1 shard has a write capacity of 1 MB/sec or 1,000 records/sec, whichever limit is hit first. So a producer writing to 1 shard can write up to 1 MB of data or 1,000 records per second. 1 shard has a read capacity of 2 MB/sec. AWS bills per shard provisioned, and you can increase or decrease the number of shards over time, as sketched below.
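
A minimal resharding sketch with the AWS SDK for Java v1 (the stream name and target count are hypothetical); UNIFORM_SCALING redistributes capacity evenly across the new shards:

    import com.amazonaws.services.kinesis.AmazonKinesis;
    import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
    import com.amazonaws.services.kinesis.model.ScalingType;
    import com.amazonaws.services.kinesis.model.UpdateShardCountRequest;

    public class ReshardExample {
        public static void main(String[] args) {
            AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

            // Double write capacity by scaling the hypothetical "user-events" stream to 4 shards.
            kinesis.updateShardCount(new UpdateShardCountRequest()
                    .withStreamName("user-events")
                    .withTargetShardCount(4)
                    .withScalingType(ScalingType.UNIFORM_SCALING));
        }
    }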
Kinesis Stream APIs:
  • PutRecord: To put 1 record into a shard.
  • PutRecords: To put a batch of records into shards.
  • Each record carries data along with a partition key; all records with the same partition key go to the same shard. Choose the partition key keeping in mind that a shard has the 1 MB/sec or 1,000 records/sec limit; if the incoming data overflows that limit, a ProvisionedThroughputExceeded exception is thrown.
  • For example, if you are streaming user data, then user_id can be the partition key, as in the sketch below.
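
A hedged producer sketch with the AWS SDK for Java v1; the stream name, partition keys, and payloads are hypothetical:

    import com.amazonaws.services.kinesis.AmazonKinesis;
    import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
    import com.amazonaws.services.kinesis.model.PutRecordRequest;
    import com.amazonaws.services.kinesis.model.PutRecordsRequest;
    import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;
    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    public class ProducerExample {
        public static void main(String[] args) {
            AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

            // PutRecord: one record; records sharing a partition key (user_id) land on the same shard.
            kinesis.putRecord(new PutRecordRequest()
                    .withStreamName("user-events")
                    .withPartitionKey("user-42")
                    .withData(ByteBuffer.wrap("{\"action\":\"login\"}".getBytes(StandardCharsets.UTF_8))));

            // PutRecords: a batch; each entry carries its own partition key.
            kinesis.putRecords(new PutRecordsRequest()
                    .withStreamName("user-events")
                    .withRecords(Arrays.asList(
                            new PutRecordsRequestEntry()
                                    .withPartitionKey("user-42")
                                    .withData(ByteBuffer.wrap("{\"action\":\"click\"}".getBytes(StandardCharsets.UTF_8))),
                            new PutRecordsRequestEntry()
                                    .withPartitionKey("user-7")
                                    .withData(ByteBuffer.wrap("{\"action\":\"view\"}".getBytes(StandardCharsets.UTF_8))))));
        }
    }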
ProvisionedThroughputExceeded Exception:
  • Select the partition key keeping in mind that it should not produce ProvisionedThroughputExceeded exceptions. If you select city_id as the partition key for a user data stream and 80% of the users belong to the same city, that shard becomes hot and ProvisionedThroughputExceeded exceptions can be thrown.
  • ProvisionedThroughputExceeded occurs when the incoming data overflows a shard's limit.
We can use the AWS CLI, the AWS SDK, or a producer library (e.g., the Kinesis Producer Library) for these APIs. A retry sketch for throttled writes follows.
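
A minimal sketch, assuming the same hypothetical stream as above, of catching ProvisionedThroughputExceededException and retrying with exponential backoff (AWS SDK for Java v1):

    import com.amazonaws.services.kinesis.AmazonKinesis;
    import com.amazonaws.services.kinesis.model.ProvisionedThroughputExceededException;
    import com.amazonaws.services.kinesis.model.PutRecordRequest;
    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class RetryingProducer {
        // Retry a PutRecord with exponential backoff when the shard's limit is exceeded.
        static void putWithRetry(AmazonKinesis kinesis, String stream, String key, String json)
                throws InterruptedException {
            long backoffMillis = 100;
            for (int attempt = 0; attempt < 5; attempt++) {
                try {
                    kinesis.putRecord(new PutRecordRequest()
                            .withStreamName(stream)
                            .withPartitionKey(key)
                            .withData(ByteBuffer.wrap(json.getBytes(StandardCharsets.UTF_8))));
                    return; // success
                } catch (ProvisionedThroughputExceededException e) {
                    Thread.sleep(backoffMillis); // back off, then retry
                    backoffMillis *= 2;
                }
            }
            throw new RuntimeException("Record dropped after repeated throttling");
        }
    }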

AWS Kinesis Firehose:
  • It is used when you have to deliver a stream of data to a destination like AWS S3, Redshift, Splunk, or Elasticsearch (see the sketch after this list).
  • You can compress the data before saving it, but compression is only supported when the destination is S3.
  • You can transform unstructured data with an AWS Lambda function and then save it to the destination.
  • If a data stream is used as the source of Firehose, Firehose does not persist intermediate data; it encrypts the data first and decrypts it once it reaches the Firehose delivery stream.
  • It is a near-real-time service (roughly 60 seconds of latency).
  • When a Kinesis data stream is the source, ingestion is bounded by that stream's shards, i.e. up to 1 MB/sec or 1,000 records/sec per shard.
  • It allows loading data into AWS S3, Elasticsearch, Splunk, or Redshift.
  • It supports format conversion, e.g. JSON to Parquet or ORC files, but format conversion is charged extra.
  • Pay only for the amount of data that passes through Firehose.
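
A hedged sketch of writing one record into a Firehose delivery stream with the AWS SDK for Java v1; the delivery stream name "user-events-to-s3" is hypothetical:

    import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehose;
    import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClientBuilder;
    import com.amazonaws.services.kinesisfirehose.model.PutRecordRequest;
    import com.amazonaws.services.kinesisfirehose.model.Record;
    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class FirehoseExample {
        public static void main(String[] args) {
            AmazonKinesisFirehose firehose = AmazonKinesisFirehoseClientBuilder.defaultClient();

            // Push one JSON record into a hypothetical delivery stream bound for S3;
            // Firehose buffers records and delivers them in near real time.
            firehose.putRecord(new PutRecordRequest()
                    .withDeliveryStreamName("user-events-to-s3")
                    .withRecord(new Record()
                            .withData(ByteBuffer.wrap("{\"action\":\"login\"}\n".getBytes(StandardCharsets.UTF_8)))));
        }
    }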
AWS Kinesis Analytics:
  • It facilitates real-time analytics using SQL.
  • Serverless architecture.
  • Auto-scaling.
  • Pay only for the actual consumption rate.
  • Can create streams out of real-time queries.
  • You can use either a Kinesis data stream or Kinesis Firehose as the source.
  • You can perform aggregations or filters over real-time data.
  • You can store the output in S3, Redshift, Splunk, or Elasticsearch, or process it further with an engine like Spark.
  • It includes in-application input and output streams, which act like tables. Stream data can be in JSON or CSV format. A minimal SQL sketch follows this list.
  • You can also add a reference object from an S3 object as an input.
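
As a minimal sketch in Kinesis Analytics SQL (assuming the default in-application input stream SOURCE_SQL_STREAM_001 and a hypothetical city_id column), a windowed aggregation over the stream could look like this:

    -- Count users per city over a 1-minute tumbling window (hypothetical schema).
    CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" ("city_id" INTEGER, "user_count" INTEGER);

    CREATE OR REPLACE PUMP "STREAM_PUMP" AS
      INSERT INTO "DESTINATION_SQL_STREAM"
        SELECT STREAM "city_id", COUNT(*) AS "user_count"
        FROM "SOURCE_SQL_STREAM_001"
        GROUP BY "city_id",
                 STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '60' SECOND);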


