
AWS IoT JITR (Just-in-Time Registration) with Thing and Policy creation using Java


This POC provides Just-in-Time Registration (JITR) of a custom certificate, plus Thing creation with a connect policy, for AWS IoT devices. You only need to put the Thing name in the Common Name field when creating the device certificate; a Thing is then created with the policy and certificate attached, using the Common Name as the Thing name.

Project Overview:

  1. Get the certificate details from the certificate ID.
  2. Parse the certificate and extract its Common Name.
  3. Create an IoT policy allowing the connect action.
  4. Create an IoT Thing named after the certificate's Common Name.
  5. Attach the policy and the Thing to the certificate.
  6. Activate the certificate.
  7. The device can now connect to AWS using this custom certificate.
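
Step 2 hinges on pulling the Common Name out of the certificate's subject. A minimal sketch of that parsing using only the JDK's LDAP name classes (the class name and sample DN below are illustrative, not from the project):

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class CommonNameExtractor {
    // Walk the RDNs of an X.500 subject DN and return the CN value,
    // which JITR uses as the Thing name.
    public static String commonName(String subjectDn) {
        try {
            for (Rdn rdn : new LdapName(subjectDn).getRdns()) {
                if ("CN".equalsIgnoreCase(rdn.getType())) {
                    return rdn.getValue().toString();
                }
            }
        } catch (InvalidNameException e) {
            // Malformed DN; fall through and treat as missing CN.
        }
        return null;
    }

    public static void main(String[] args) {
        // Subject of a device certificate whose CN is the Thing name.
        System.out.println(commonName("CN=myDevice,O=Example,C=US"));
    }
}
```

In the Lambda, the subject DN would come from the certificate returned by the DescribeCertificate API rather than a hard-coded string.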

Steps for JITR & Thing creation

Create CA Certificate:

  1. openssl genrsa -out CACertificate.key 2048
  2. openssl req -x509 -new -nodes -key CACertificate.key -sha256 -days 365 -out CACertificate.pem
  • Enter necessary details like city, country, etc.

Create private key verification certificate using CA certificate

  1. aws iot get-registration-code
  • This registration code will be used in a following step.
  2. openssl genrsa -out privateKeyVerification.key 2048
  3. openssl req -new -key privateKeyVerification.key -out privateKeyVerification.csr
  • Enter the necessary details. Note: for the Common Name, enter the registration code copied in step 1.
  4. openssl x509 -req -in privateKeyVerification.csr -CA CACertificate.pem -CAkey CACertificate.key -CAcreateserial -out privateKeyVerification.crt -days 365 -sha256
  • After this step, the private key verification certificate is created. Now we need to register the CA certificate with AWS.

Register & Activate Certificate to AWS

  1. aws iot register-ca-certificate --ca-certificate file://CACertificate.pem --verification-certificate file://privateKeyVerification.crt
  • This outputs something like: { "certificateArn": "< certificateArn >", "certificateId": "< certificateId >" }
  2. certId=< output certificateId >
    • Assign the certificateId from the output to the certId shell variable.
  3. aws iot describe-ca-certificate --certificate-id $certId
    • This outputs the CA certificate details.
  4. aws iot update-ca-certificate --certificate-id $certId --new-status ACTIVE
    • This activates the CA certificate. You can also do it from the AWS console.
  5. aws iot update-ca-certificate --certificate-id $certId --new-auto-registration-status ENABLE
    • This enables auto-registration of device certificates signed by this CA.

Device Certificate Registration

  1. To verify the certificate registration request, subscribe to the following topic from the AWS IoT Core Test console (this step is optional):
    • $aws/events/certificates/registered/#
  2. openssl genrsa -out device.key 2048
  3. openssl req -new -key device.key -out device.csr
    • Enter the necessary details. Note: for the Common Name, enter the desired Thing name.
  4. openssl x509 -req -in device.csr -CA CACertificate.pem -CAkey CACertificate.key -CAcreateserial -out device.crt -days 365 -sha256
  5. cat device.crt CACertificate.pem > deviceAndCACert.crt

Create AWS Lambda function and IAM role

  • Create an AWS Lambda function with the Java 8 runtime and an IAM role that has the IoT Full Access policy. Build a JAR from this project using mvn clean install and upload the JAR to Lambda.
  • When a new device connects with a certificate signed by the registered CA certificate, AWS IoT publishes an MQTT message to the $aws/events/certificates/registered/# topic.
  • We need to create an IoT rule from the Act section to trigger the Lambda function when a certificate registration message arrives.
    • Add the rule query statement SELECT * FROM '$aws/events/certificates/registered/#' and set the action to "Send a message to a Lambda function", selecting the Lambda function created above.
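
The registration event the rule forwards to the Lambda is a small JSON payload, and the handler needs the certificateId out of it before it can call DescribeCertificate. A minimal sketch using only JDK regex (the class name and the sample payload fields are assumptions based on the registration event's shape):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegistrationEvent {
    // certificateId values are hex strings in the registration event JSON.
    private static final Pattern CERT_ID =
            Pattern.compile("\"certificateId\"\\s*:\\s*\"([0-9a-fA-F]+)\"");

    // Extract certificateId from the event payload, or null if absent.
    public static String certificateId(String json) {
        Matcher m = CERT_ID.matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String event = "{\"certificateId\":\"ab12cd\",\"caCertificateId\":\"ff00\","
                + "\"certificateStatus\":\"PENDING_ACTIVATION\"}";
        System.out.println(certificateId(event));
    }
}
```

In a real handler you would typically use a JSON library (or the SDK's event types) instead of a regex; the sketch just shows which field the rest of the flow depends on.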

Test

  1. mosquitto_pub --cafile root.cert --cert deviceAndCACert.crt --key device.key -h <your-iot-endpoint> -p 8883 -q 1 -t foo/bar -i anyclientID --tls-version tlsv1.2 -m "RegisterCertificateAndCreateThingAndPolicy" -d
    • Note: you need the Mosquitto client to run the above command. You can get your endpoint with: aws iot describe-endpoint. root.cert is the Amazon root CA certificate, available from the AWS IoT documentation.
    • Now you can verify that a new Thing was created, with the policy and certificate attached to it, and the certificate marked as ACTIVE.
    • The policy added here allows only the connect action; you can change it as per your requirements to allow publish, subscribe, etc.
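
The connect-only policy document the Lambda attaches can be built as a plain JSON string before calling the CreatePolicy API. A sketch with a hypothetical helper (the wildcard resource is an assumption for brevity; scope it to your client ARN in production):

```java
public class ConnectPolicy {
    // Build a minimal IoT policy document allowing only iot:Connect.
    // Add iot:Publish / iot:Subscribe statements here if your devices need them.
    public static String document(String resourceArn) {
        return "{"
                + "\"Version\":\"2012-10-17\","
                + "\"Statement\":[{"
                + "\"Effect\":\"Allow\","
                + "\"Action\":\"iot:Connect\","
                + "\"Resource\":\"" + resourceArn + "\""
                + "}]}";
    }

    public static void main(String[] args) {
        System.out.println(document("*"));
    }
}
```

Extending the policy for publish/subscribe is just a matter of adding further statements to the Statement array.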
