Hi everyone, I'm Ivan Fardin, an MSc student in Engineering in Computer Science at Sapienza University of Rome, and this project was developed as part of the Internet of Things 19-20 course.
In this article I'll show you how to set up an IoT MQTT cloud-based system for a Human Activity Recognition web app, using values collected by the accelerometer sensor of the users' devices according to the crowd-sensing technique.
Demo

Overview
The project consists of:
- a web app that collects and displays data from the accelerometer sensor of the user's mobile phone
- an IoT MQTT cloud-based backend system implemented using AWS IoT
Assuming movement at a frequency of at most 0.5 Hz (i.e. 30 steps per minute), a sampling frequency of 1 Hz (i.e. one message per second) is theoretically sufficient, by the Nyquist criterion, to recognize whether the user is standing still or not.
All the code is available in the GitHub repository.
Architecture
Let's start going into detail by analyzing each component in the picture.
The web app is accessible through a browser at the following website and is OS independent, so different devices such as mobile phones, tablets, desktops and laptops can be connected to the backend. It makes use of the Generic Sensor API to collect data from the accelerometer sensor of the mobile device. The Generic Sensor API is a sensor framework that exposes sensor devices to the web platform in a secure context (i.e. HTTPS).
Sensor values or the resulting human activities are transmitted to the cloud infrastructure on a specific topic via MQTT, a lightweight and widely adopted messaging protocol designed for constrained devices, using a unique ID (identity) associated with the access to the website.
MQTT is implemented using the Eclipse Paho JavaScript Client, an MQTT browser-based client library written in Javascript that uses WebSockets (a communications protocol, providing full-duplex communication channels over a single TCP/IP connection) to connect to an MQTT Broker.
Why MQTT over WebSocket? Since the app lives in a browser, MQTT over WebSocket allows it to send and receive MQTT data directly in the browser.
Messages sent via Paho depend on the mode selected by the user to recognize the activity:
- raw data if the human activity recognition model is executed by the cloud
- the resulting human activities if the human activity recognition model is executed by the user's device
The backend is implemented using AWS IoT which provides secure, bi-directional communication between Internet-connected devices and the AWS Cloud. The communication between the devices and the AWS Cloud is handled by the AWS IoT message broker according to the publish-subscribe pattern.
The AWS IoT message broker provides a secure mechanism for devices and AWS IoT applications to publish and receive messages from each other. Clients send data by publishing a message on a topic and receive data by subscribing to a topic. When the message broker receives a message from a publishing client, it forwards it to all clients that have subscribed to that topic.
Moreover, AWS IoT offers the possibility to create rules that define one or more actions to perform based on the topic of an MQTT message. In this way, I implemented cloud computation by creating a rule in which the broker forwards all the incoming messages from the web app devices to a specified AWS Lambda function. This function processes the incoming data and inserts it into an AWS DynamoDB table, which represents the persistence layer of the architecture.
Finally, the web app is connected both to the AWS DynamoDB service, to retrieve and display device data and resulting activities of the last hour according to the selected mode, and to the broker, to send data and, if cloud computing is enabled, to receive real-time data from the Lambda function.
AWS Configuration
First you need to create an AWS account if you do not have one. As a student I have an AWS Educate account, which offers limited access to cloud resources at no cost.
If, like me, you have an AWS Educate account, ensure that the selected region is us-east-1 (N. Virginia), the only one available for this type of account; otherwise the backend will not work.
Once you have signed up or logged in, move on to the next section.
DynamoDB
A good starting point for the backend implementation is the creation of an AWS DynamoDB table.
So, in the AWS console find the DynamoDB service and click on it. Then click on Create table, fill out the form as follows and press Create.
For consistency with the code in the web app section, I will use Id and dateTime, but of course you can use any attributes as partition and sort keys, or decide not to use a sort key at all.
When the table is created, you can get the associated ARN by scrolling down the Overview tab.
By clicking on the Items tab you can see which elements are present in the table.
Since the sampling frequency of the accelerometer sensor is 1 Hz and values are immediately transmitted for real-time recognition, the size of the table grows quickly with usage. This may become a performance problem, but recalling the functionalities of the web app, persistence is needed only to show the data of the last hour, so items can be deleted after this interval of time to improve performance. For this purpose, DynamoDB offers the Time to Live (TTL) functionality, which lets you define when items in a table expire so that they can be automatically deleted from the database.
Therefore, in the Items tab click on Actions and Manage TTL.
Here put the attribute you decide to use as TTL and then click Continue
Nothing could be simpler, your DB is ready.
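If you prefer scripting the setup, the same table and TTL configuration can also be created programmatically. Here is a minimal sketch with boto3; the table name placeholder, region and billing mode are my assumptions, adapt them to your account.

import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

# Create the table with Id as partition key and dateTime as sort key
dynamodb.create_table(
    TableName='<your-table>',
    AttributeDefinitions=[
        {'AttributeName': 'Id', 'AttributeType': 'S'},
        {'AttributeName': 'dateTime', 'AttributeType': 'S'},
    ],
    KeySchema=[
        {'AttributeName': 'Id', 'KeyType': 'HASH'},        # partition key
        {'AttributeName': 'dateTime', 'KeyType': 'RANGE'}  # sort key
    ],
    BillingMode='PAY_PER_REQUEST'
)

# Wait for the table to be active, then enable TTL on the "TTL" attribute
dynamodb.get_waiter('table_exists').wait(TableName='<your-table>')
dynamodb.update_time_to_live(
    TableName='<your-table>',
    TimeToLiveSpecification={'Enabled': True, 'AttributeName': 'TTL'}
)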
Lambda
Since your DB is ready, let's see how to populate it. Remember that when an MQTT message arrives at the AWS IoT broker, the broker checks whether the topic matches the one you will choose to invoke your Lambda function. AWS Lambda is a compute service that lets you run code without provisioning or managing servers; it executes your code only when needed and scales automatically.
To create a Lambda function search and select the Lambda service in the AWS console.
In its homepage click Create function
then choose a name for it, the Author from scratch option (since my implementation is very simple) and the programming language (in my case Python 3.7). Finally, click Create function.
Fine, now you have to enter code in the editor that:
- in case of edge computing will insert an item in the DB corresponding to the incoming MQTT message
- in case of cloud computing will execute the model for the Human Activity Recognition according to values in the incoming MQTT message, insert a corresponding item in the DB and send an MQTT message back to the client for a real-time result
import json
import boto3
import time

dynamodb = boto3.resource('dynamodb', region_name='<your-region>')
table = dynamodb.Table('<your-table>')

def lambda_handler(event, context):
    # Data expires after 1 hour (3600 s) plus a few minutes (400 s)
    # Time in Unix epoch
    ttl = int(time.time()) + 4000
    # Check if edge computation
    if "isStanding" in event:
        table.put_item(
            Item={
                "Id": event["clientID"],
                "dateTime": event["dateTime"],
                "isStanding": event["isStanding"],
                "computation": "edge",
                "TTL": ttl
            }
        )
    # Else cloud computation
    else:
        isStanding = True
        x2 = event["x2"]
        y2 = event["y2"]
        z2 = event["z2"]
        if ((abs(x2 - event["x1"]) * 0.67 > 0.292) or
                (abs(y2 - event["y1"]) * 0.7 > 0.145) or
                (abs(z2 - event["z1"]) * 0.67 > 0.45)):
            isStanding = False
        clientID = event["clientID"]
        table.put_item(
            Item={
                "Id": clientID,
                "dateTime": event["dateTime"],
                "x": str(x2),  # DynamoDB does not support the float type
                "y": str(y2),
                "z": str(z2),
                "isStanding": isStanding,
                "computation": "cloud",
                "TTL": ttl
            }
        )
        client = boto3.client('iot-data', region_name='<your-region>')
        # Change topic, qos and payload as needed
        response = client.publish(
            topic="<your-cloud-computing-response-topic>" + clientID,
            qos=1,
            payload=json.dumps({
                "x": x2,
                "y": y2,
                "z": z2,
                "isStanding": isStanding
            })
        )
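To check the function before wiring it to the broker, you can invoke the handler with hand-made events. The sketch below shows the two payload shapes the web app publishes; the client ID, timestamps and accelerometer values are made-up examples. Note that, when run outside AWS, boto3 needs credentials with DynamoDB and IoT permissions in your environment.

# Hypothetical test events mirroring the MQTT payloads published by the web app
edge_event = {
    "clientID": "example-client-id",
    "dateTime": "2020-06-01 10:00:00",
    "isStanding": True            # activity already recognized on the device
}

cloud_event = {
    "clientID": "example-client-id",
    "dateTime": "2020-06-01 10:00:01",
    "x1": 0.10, "y1": 9.75, "z1": 0.32,   # previous accelerometer reading
    "x2": 0.85, "y2": 9.81, "z2": 0.30    # current accelerometer reading
}

# Run locally, or paste the dictionaries into the Lambda console Test tab
lambda_handler(edge_event, None)
lambda_handler(cloud_event, None)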
Cognito and IAM
Now you need to create an AWS Cognito identity pool that will grant users access to the DynamoDB service and to MQTT operations with AWS IoT.
In the AWS console search and select the Cognito service.
On its homepage press the Create new identity pool button and fill out the form as follows, then click on Create Pool and then on Allow.
Hence, click on the Sample code tab to view your pool ID.
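For reference, the pool can also be created from code. A minimal sketch with boto3 follows; the pool name is an arbitrary example, and keep in mind that the console flow additionally creates the two IAM roles for you.

import boto3

cognito = boto3.client('cognito-identity', region_name='us-east-1')

# Create an identity pool that allows unauthenticated (guest) identities,
# as in the console form above
pool = cognito.create_identity_pool(
    IdentityPoolName='HARWebAppPool',   # example name
    AllowUnauthenticatedIdentities=True
)
print(pool['IdentityPoolId'])           # needed later by the web app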
Now move to the IAM service page and click Roles
Here search for the unauthenticated (Unauth) Cognito role just created and click on it.
Then press the Attach policies button
and search for AmazonDynamoDBFullAccess, select it and press the Attach policy button.
Your Cognito identity pool is now ready to be used to access your DynamoDB table.
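The attach step can be scripted as well; a minimal boto3 sketch, assuming the role name placeholder below is replaced with your unauthenticated Cognito role:

import boto3

iam = boto3.client('iam')

# Attach the AWS managed DynamoDB policy to the unauthenticated Cognito role
iam.attach_role_policy(
    RoleName='<your-unauth-cognito-role>',
    PolicyArn='arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess'
)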
To complete the setup you have to create and attach an IoT policy in order to perform the MQTT connect, subscribe, publish and receive operations.
To create the policy click again on the Attach policies button and then on Create policy (the pictures of IAM steps 3 and 4 may be useful).
Here move to the JSON tab
and in the editor paste the following JSON replacing the region, account ID and topic fields with yours
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iot:Connect"
],
"Resource": [
"arn:aws:iot:<your-region>:<your-account-id>:client/${iot:ClientId}"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Publish",
"iot:Receive"
],
"Resource": [
"arn:aws:iot:<your-region>:<your-account-id>:topic/<your-topic>"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Subscribe"
],
"Resource": [
"arn:aws:iot:<your-region>:<your-account-id>:topicfilter/<your-topic>"
]
}
]
}
Then, press on Review policy and insert the policy name and description and click Create policy.
Finally, repeat IAM steps 3 and 4 to attach this new policy to the unauthenticated role.
You are done: your Cognito identity pool is ready to be used to access both DynamoDB and IoT Core.
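Equivalently, the custom IoT policy can be created and attached from code. A minimal boto3 sketch, assuming the JSON above is saved to iot-policy.json and that the policy and role names below are placeholders:

import boto3

iam = boto3.client('iam')

# Create the customer managed policy from the JSON document shown above
with open('iot-policy.json') as f:
    policy_document = f.read()

policy = iam.create_policy(
    PolicyName='HARWebAppIoTPolicy',    # example name
    PolicyDocument=policy_document
)

# Attach it to the unauthenticated Cognito role
iam.attach_role_policy(
    RoleName='<your-unauth-cognito-role>',
    PolicyArn=policy['Policy']['Arn']
)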
IoT Core
You have almost finished setting up your backend; you just need to create a thing object using AWS IoT Core.
In the AWS console find the IoT Core service and click on it.
Here you have to connect your device to the platform, so go to Onboard and click on Get started in Onboard a device.
Then select how you are connecting to AWS IoT; in this case I chose the Linux platform and the Java programming language. Press Next.
Insert the name of your thing and go ahead
Then download the certificate and the private and public keys and go to the next step.
Here a tutorial to configure and test your device is displayed; press Done. Your thing has been created :)
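If you prefer the API to the onboarding wizard, the thing, certificate and keys can be created with a few boto3 calls. A minimal sketch (the thing name is an example, and remember to store the returned keys somewhere safe):

import boto3

iot = boto3.client('iot', region_name='us-east-1')

# Create the thing
iot.create_thing(thingName='HARThing')   # example name

# Create an active certificate with its key pair and attach it to the thing
cert = iot.create_keys_and_certificate(setAsActive=True)
iot.attach_thing_principal(thingName='HARThing', principal=cert['certificateArn'])

# cert['certificatePem'], cert['keyPair']['PublicKey'] and
# cert['keyPair']['PrivateKey'] correspond to the files downloaded in the wizard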
Afterwards, take note of your region and account ID from the Amazon Resource Name (ARN) that uniquely identifies your thing.
ARNs have the following general formats:
- arn:partition:service:region:account-id:resource-id
- arn:partition:service:region:account-id:resource-type/resource-id
- arn:partition:service:region:account-id:resource-type:resource-id
So, from the IoT Core initial page go to Things, choose the newly created one and click on it.
Click on Settings and take note of your custom endpoint, which allows you to connect to AWS IoT (it will be very useful later).
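The same endpoint can be retrieved programmatically, which is handy later when you need it as the host of the WebSocket connection. A minimal boto3 sketch:

import boto3

iot = boto3.client('iot', region_name='us-east-1')

# Retrieve the account-specific endpoint used as <your-host> in the web app
endpoint = iot.describe_endpoint(endpointType='iot:Data-ATS')['endpointAddress']
print(endpoint)   # e.g. xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com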
Now you need to edit the policy that was generated when you created your thing in order to allow your client to connect and communicate with the broker via MQTT.
Hence, from the IoT Core initial page go to Secure and then to Policies and click on the policy associated with your thing
Here click on Edit policy document and reuse the JSON used in the IAM console, replacing the region and account ID fields with the ones you noted before
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iot:Connect"
],
"Resource": [
"arn:aws:iot:<your-region>:<your-account-id>:client/${iot:ClientId}"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Publish",
"iot:Receive"
],
"Resource": [
"arn:aws:iot:<your-region>:<your-account-id>:topic/<your-topic>"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Subscribe"
],
"Resource": [
"arn:aws:iot:<your-region>:<your-account-id>:topicfilter/<your-topic>"
]
}
]
}
Where:
- the ${iot:ClientId} variable represents the client ID used to connect to the AWS IoT Core message broker
- * is a wildcard for topic names, equivalent to the # wildcard in the MQTT protocol
For more info about IoT policies see the relevant documentation.
Now you have to add a rule to the broker to invoke the Lambda function defined earlier when an incoming message is published on a specified topic. Take note of the topic; you will use it later.
In order to do that, go to Act and press the Create a rule button. Then fill out the form with the rule name, the description and the SQL query that will filter the incoming messages on the topic.
Scroll the page and press the Add action button to set an action for the rule
and select Send a message to a Lambda function.
Then click on Configure action at the bottom of the page and select the previously created function
Finally, click Add action and then Create rule to conclude: your backend is finally ready :)
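For completeness, the rule can also be created via the API. A minimal boto3 sketch follows, where the rule name, topic and Lambda ARN are placeholders; note that, unlike the console, creating the rule this way does not automatically grant AWS IoT permission to invoke the function.

import boto3

iot = boto3.client('iot', region_name='us-east-1')

# Forward every message published on <your-topic> to the Lambda function
iot.create_topic_rule(
    ruleName='forwardToHARLambda',   # example name
    topicRulePayload={
        'sql': "SELECT * FROM '<your-topic>'",
        'description': 'Forward HAR messages to Lambda',
        'actions': [{
            'lambda': {
                'functionArn': 'arn:aws:lambda:<your-region>:<your-account-id>:function:<your-function>'
            }
        }],
        'ruleDisabled': False
    }
)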
Once you have set up your cloud-based backend, it's time to move on to the web app development, which has a double role: producing and showing data. If edge computing is selected, it must also process data locally.
To achieve this you need some basics in HTML5 and JavaScript (and optionally CSS).
Let's start from the index.html file that contains the elements on which Javascript will work.
First, you have to import the optional CSS file styles.css and the necessary SDK and scripts including those for AWS and Paho MQTT.
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/pixeden-stroke-7-icon@1.2.3/pe-icon-7-stroke/dist/pe-icon-7-stroke.min.css">
<link rel="stylesheet" href="css/styles.css">
<script src="js/main.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<!-- For charts -->
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.1"></script>
<script src="https://cdn.jsdelivr.net/npm/hammerjs@2.0.8"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-zoom@0.7.7"></script>
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.7.16.min.js"></script>
<script src="js/aws.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/paho-mqtt/1.0.1/mqttws31.min.js" type="text/javascript"></script>
<script src="js/pahoMQTTClient.js"></script>
</head>
Then, in the body section of my implementation, I used two buttons to allow the user to choose between Cloud and Edge Computing, and another one to retrieve the last hour's data
<p id="btns" class="buttonsAnimation">
<span id="cloudBtn" class="btn btn-primary btn-lg btn-custom">Cloud Computing</span>
<span id="edgeBtn" class="btn btn-primary btn-lg btn-custom">Edge Computing</span>
</p>
<p id="historyBtn" class="hide">
<span class="btn btn-secondary btn-lg btn-custom">Show last hour values</span>
</p>
a paragraph to show the Human Activity Recognition result and the accelerometer values in case of cloud computing
<p id="measure" class="measure hide"></p>
and four cards, of the following form (with different IDs of course), to show the last hour's values and activities
<div id="historyChart" class="chart hide">
<div class="card">
<div class="card-body">
<!-- Card header -->
<div class="stat-widget-five">
<div class="stat-icon dib green-color">
<i class="pe-7s-graph2"></i>
</div>
<div class="stat-content">
<div class="dib">
<div class="stat-text">History Chart</div>
</div>
</div>
</div>
<!-- /Card header -->
<canvas id="historyCanvas"></canvas>
</div>
</div>
</div>
...
</html>
I omitted some elements that are not fundamental to succeed in the task but that you can find in my GitHub repository if interested.
Well, now it's time to move on to the logic of the web app, which consists of three JavaScript files:
- aws.js
- pahoMQTTClient.js
- main.js
The first one contains two functions used to set up MQTT over the WebSocket protocol with AWS IoT Core in a web application, specifying credentials via an AWS Signature Version 4 signed URL.
The second one is a JavaScript class that performs the connection to the server using WebSockets. Moreover, it offers functions to subscribe or unsubscribe to an MQTT topic, to publish and receive MQTT messages and to disconnect from the server. The implementation is very simple, and more info about Paho MQTT can be found on the official page.
The class constructor takes two input parameters:
- a request URL at which performing the WebSocket connection
- a client ID that uniquely identifies the client in the MQTT channel.
class PahoMQTTClient {
constructor(requestUrl, clientId) {
this.requestUrl = requestUrl;
this.clientId = clientId;
this.client = null;
this.isConnected = false;
}
...
}
In order to connect the client to the server, I wrote the following function, which receives as parameters two callbacks: one invoked when the connection succeeds or fails, and one invoked when an MQTT message is received after subscribing to a topic. The connection is performed using SSL and the most recent MQTT version available for AWS IoT (MQTT 3.1.1, i.e. mqttVersion 4 in Paho). Notice that, besides the connect options, the client also registers a handler for incoming messages, from which my receive callback is invoked.
// Connect to the server
conn(callbackConnection, callbackReceive) {
this.client = new Paho.MQTT.Client(this.requestUrl, this.clientId);
var connectOptions = {
onSuccess: () => {
// Connect succeeded
console.log("onConnect: connect succeeded");
// In an arrow function "this" represents the owner of the function
// while in a regular function "this" represents the object that calls the function
this.isConnected = true;
callbackConnection();
},
useSSL: true,
timeout: 3,
mqttVersion: 4,
onFailure: function() {
// Connect failed
console.log("onFailure: connect failed");
callbackConnection();
}
};
// Set callback handlers
this.client.onConnectionLost = onConnectionLost;
this.client.onMessageArrived = onMessageArrived;
// Connect the client
this.client.connect(connectOptions);
// Called when the client loses its connection
function onConnectionLost(responseObject) {
if (responseObject.errorCode !== 0)
console.log("onConnectionLost:" + responseObject.errorMessage);
}
// Called when a message arrives
function onMessageArrived(message) {
console.log("onMessageArrived:" + message.payloadString);
callbackReceive(message.payloadString);
}
}
The subscribe and unsubscribe functions are trivial
// Subscribe to a topic
sub(topic) {
console.log("Subscribing on topic " + topic);
this.client.subscribe(topic);
}
// Unsubscribe to a topic
unsub(topic) {
console.log("Unsubscribing on topic " + topic);
this.client.unsubscribe(topic);
}
as well as the publish one
// Publish a message on a topic
pub(message, topic) {
console.log("Publishing message on topic " + topic);
// QOS = 0 => Best effort, retained = false => Message delivered only to current subscriptions
this.client.send(topic, message, 0, false);
}
The last remaining step to connect with the AWS backend is providing the credentials. We can either use IAM credentials directly
var host = "<your-endpoint>", region = "<your-region>";
var credentials = new AWS.Credentials("<sessionID>", "<sessionKey>", "<sessionToken>");
var requestUrl = SigV4Utils.getSignedUrl(host, region, credentials);
or Cognito
// Initialize Amazon Cognito credentials provider
AWS.config.region = '<your-region>';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
IdentityPoolId: '<your-id>',
});
// Obtain credentials
AWS.config.credentials.get(function(){
// Credentials will be available when this function is called
var host = "<your-host>";
var requestUrl = SigV4Utils.getSignedUrl(host, AWS.config.region, AWS.config.credentials);
...
});
Which one to choose? Cognito, of course, because I never want to expose my credentials.
So, in the main.js file, when the credentials are obtained, the MQTT connection over WebSocket is performed using the functions of the JavaScript files explained above
var uuid = createUUID();
var mqttClient = null;
// Initialize Amazon Cognito credentials provider
AWS.config.region = '<your-region>';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
IdentityPoolId: '<your-id>',
});
// Obtain credentials
AWS.config.credentials.get(function(){
// Credentials will be available when this function is called
var host = "<your-host>";
var requestUrl = SigV4Utils.getSignedUrl(host, AWS.config.region, AWS.config.credentials);
mqttClient = new PahoMQTTClient(requestUrl, uuid);
mqttClient.conn(callbackConnection, callbackReceive);
});
The two callback functions are very simple but necessary, since the operations are asynchronous. The first notifies the global environment that the connection attempt has concluded, while the second one is used to receive MQTT data to be displayed to the user in real time when cloud computing is enabled.
// Callback for the MQTT client connection
function callbackConnection() {
fired = true;
// Check if the callback is invoked after that a button is pressed (slow connection)
if(cloudBtnActivated || edgeBtnActivated)
main();
}
// Callback for the MQTT client when a message is received after subscription
function callbackReceive(msg) {
if(!cloudBtnActivated) return;
msg = JSON.parse(msg);
setMeasureText('x: ' + msg.x + '<br>y: ' + msg.y + '<br>z: ' + msg.z + "<br>" + (msg.isStanding ? "You're standing" : "You're moving"));
}
where cloudBtnActivated and edgeBtnActivated are two global flags denoting whether the cloud computing button or the edge computing one, respectively, is activated.
A crucial software component of the web app is the Generic Sensor API, which allows collecting data from the accelerometer sensor of the user's mobile device when the context is secure (HTTPS).
The usage is very simple, but reading the sensor requires the user's permission, so first check whether it has been granted
navigator.permissions.query({ name: "accelerometer" }).then(result => {
if (result.state != 'granted') {
setMeasureText("Sorry, we're not allowed to access sensors on your device");
return;
}
start();
}).catch(err => {
setMeasureText("Integration with Permissions API is not enabled");
});
If successful, call the start function, which will create the JavaScript object from which data is read
var accelerometer = null;
var topic = "<your-topic>" + uuid;
function start() {
if(accelerometer != null) {
accelerometer.start();
return;
}
try {
// For cloud computing
mqttClient.sub("<your-cloud-computing-response-topic>" + uuid);
// Read once per second
accelerometer = new Accelerometer({ frequency: 1 });
accelerometer.addEventListener('error', errorListener);
accelerometer.addEventListener('reading', readListener);
accelerometer.start();
} catch (error) {
// Handle construction errors
setMeasureText(error.message);
}
}
As explained in the Architecture section, I sample with a frequency of 1 Hz and immediately subscribe to the topic to receive cloud computing result messages. Notice that the try block is executed only once in the entire life cycle of the app, to avoid creating the sensor object multiple times.
The error listener is very easy; it simply displays an error message in the area where the HAR result should be shown
function errorListener(event) {
// Handle runtime errors
setMeasureText(event.error.message);
}
The read listener is instead more interesting because it contains the processing of sensor values and the subsequent display of the HAR result in the case of edge computing, in addition to the sending of MQTT messages in both modes.
function readListener(event) {
var now = { x:event.target.x, y:event.target.y, z:event.target.z };
values.push(now);
// Cloud-based Deployment
if(cloudBtnActivated && values.length > 1) {
mqttClient.pub(createJsonString(values), topic);
values.shift();
}
// Edge-based Deployment
else if(edgeBtnActivated && values.length > 1) {
var check = isStanding(now);
if(check)
setMeasureText('x: ' + event.target.x + '<br>y: ' + event.target.y + '<br>z: ' + event.target.z + "<br>You're standing");
else
setMeasureText('x: ' + event.target.x + '<br>y: ' + event.target.y + '<br>z: ' + event.target.z + "<br>You're moving");
mqttClient.pub(createJsonString(check), topic);
}
}
To stop the reading I wrote a simple function that doesn't destroy the sensor object (remember the start function) but clears the area dedicated to displaying the HAR result.
function stop() {
if(accelerometer != null)
accelerometer.stop();
setMeasureText("");
}
Well, let's see the most awaited part of the code: the HAR model. Actually, you have already seen it while writing the Lambda function, but I haven't said anything about it yet. How does it work? It does a simple check on two successive measures to detect whether the user is moving. If the difference between the two measures exceeds a specified threshold on any axis, the result is movement; otherwise the user is considered standing still. Thresholds and normalization factors were computed empirically to minimize noise, and the results are pretty good.
// Check if the user is standing (do side effect on values array removing the old element)
function isStanding(now) {
var before = values.shift();
// One decides for all
if((Math.abs(now.x - before.x)*0.67 > 0.292)
|| (Math.abs(now.y - before.y)*0.7 > 0.145)
|| (Math.abs(now.z - before.z)*0.67 > 0.45))
return false;
return true;
}
Last but not least, to see the last hour's activities (and sensor values if cloud computing is enabled), you can click Show last hour values, which performs a query on the DB to retrieve the data and then displays them via zoomable and pannable charts. These are realized using the Chart.js library, Hammer.js for gesture recognition and the chartjs-plugin-zoom plugin for zooming and panning.
The connection with DynamoDB is performed by modifying the previous credentials block as follows
var uuid = createUUID();
var mqttClient = null;
// Initialize Amazon Cognito credentials provider
AWS.config.region = '<your-region>';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
IdentityPoolId: '<your-id>',
});
// Obtain credentials
AWS.config.credentials.get(function(){
// Credentials will be available when this function is called
var host = "<your-host>";
var requestUrl = SigV4Utils.getSignedUrl(host, AWS.config.region, AWS.config.credentials);
mqttClient = new PahoMQTTClient(requestUrl, uuid);
mqttClient.conn(callbackConnection, callbackReceive);
});
var docClient = new AWS.DynamoDB.DocumentClient();
and since DynamoDB is a NoSQL database, you have to use the following query syntax
var params = {
TableName : "<your-table-name>",
ProjectionExpression: "<your-attribute1-to-project>, ..., <your-attributeN-to-project>",
KeyConditionExpression: "<expression-on-key-attributes>",
FilterExpression: "<expression-on-key-attributes>",
ExpressionAttributeNames:{
"<attribute1-substitution>": "<attribute1>",
...,
"<attributeM-substitution>": "<attributeM>"
},
ExpressionAttributeValues: {
"<attribute1-value-substitution>": "<attribute1>",
...,
"<attributeL-value-substitution>": "<attributeL>"
}
};
where:
- KeyConditionExpression specifies the search criteria: a string that determines the items to be read from the table or index. You must specify the partition key name and value as an equality condition
- FilterExpression determines which items within the query results should be returned to you
- ExpressionAttributeNames provides name substitution. This is used because some words are reserved in Amazon DynamoDB
- ExpressionAttributeValues provides value substitution. This is used because you can't use literals in any expression, including KeyConditionExpression
- N >= M and N >= L
For more details consult the AWS DynamoDB Documentation.
Hence, to implement this feature, the query is the following
var params = {
TableName : "<your-table-name>",
ProjectionExpression: cloudBtnActivated ? "Id, #dt, x, y, z, isStanding" : "Id, #dt, isStanding",
KeyConditionExpression: "Id = :clientID and #dt between :start_h and :end_h",
FilterExpression: "computation = :computation",
ExpressionAttributeNames:{
"#dt": "dateTime"
},
ExpressionAttributeValues: {
":clientID": uuid,
":start_h": dateTime[1],
":end_h": dateTime[0],
":computation": cloudBtnActivated ? "cloud" : "edge"
}
};
docClient.query(params, function(err, data) {
if (err) {
// Error
} else {
// Success, do stuff
}
});
where cloudBtnActivated is a global flag to denote if the cloud computing button is activated or not. It's sufficient to separate the functionalities since they are mutually exclusive (i.e. you cannot use both at the same time).
Warning: I used Id as the partition key and dateTime as the sort key; they must be consistent with those you defined in your DynamoDB table. So, if you used different ones, replace my keys with yours in the code.
If successful, you have to iterate over the response object to retrieve the data and display them
var sensorValues = [];
var dateTimes = [];
// Additional arrays for cloud computing
var xValues = [];
var yValues = [];
var zValues = [];
// Check if cloud computing
if(cloudBtnActivated)
data.Items.forEach(function(data) {
sensorValues.push(data.isStanding ? 0 : 1);
// DynamoDB does not support float type so, in the table, the value is stored as string
xValues.push(parseFloat(data.x));
yValues.push(parseFloat(data.y));
zValues.push(parseFloat(data.z));
dateTimes.push(data.dateTime);
});
else
data.Items.forEach(function(data) {
sensorValues.push(data.isStanding ? 0 : 1);
dateTimes.push(data.dateTime);
});
// Continue scanning if we have more data (per scan 1MB limitation)
if (typeof data.LastEvaluatedKey != "undefined") {
params.ExclusiveStartKey = data.LastEvaluatedKey;
docClient.scan(params, onScan);
}
if (sensorValues.length == 0)
setTimeout(function() {
alert("No data sent in the past hour");
}, 100);
else {
if(cloudBtnActivated) {
drawLineChart(dateTimes, sensorValues, "historyCanvas", "Activity Cloud Computing");
drawLineChart(dateTimes, xValues, "historyXCanvas", "x Cloud Computing");
drawLineChart(dateTimes, yValues, "historyYCanvas", "y Cloud Computing");
drawLineChart(dateTimes, zValues, "historyZCanvas", "z Cloud Computing");
}
else
drawLineChart(dateTimes, sensorValues, "historyCanvas", "Activity Edge Computing");
}
For further details on code not covered here, see my GitHub repository.
That's all, enjoy it and if you appreciate my work let me know with a like or a comment. Thank you!