This is a project that allows web administrators to check and report the availability of websites, by checking that each URL is reachable and that its response contains a required string. The list of websites should be set in the config.js file.

The project works by using two AWS Lambda functions: scrape and requestUrl. The scrape function is called on a set interval (as a cron job); it reads the list of URLs from the configuration file and, for each one, calls the requestUrl function. The requestUrl function sends a GET request to the given URL and searches the response for the string pattern defined in config.js. The result is then saved on AWS CloudWatch as a log entry containing the time the request took, the URL, and one of the following statuses:

- success: the response status code is between 200 and 300, and the required content was found on the page.
- not_fulfilled: the response status code is between 200 and 300, but the required content was not found on the page.
- bad_request: the response status code is not between 200 and 300.
- connection_refused: the connection was refused by the server.
- server_error: the connection could not be made, so this should be read as unknown.

Currently this project searches for the string within the HTML with a simple indexOf, since it only needs to find plain strings within an HTML page. This could easily be extended to support regular expressions if needed.

This project uses the Serverless Framework to deploy the Lambda functions, which requires Node.js v6.5.0 or later. It is necessary to have the Serverless CLI tool, which can be installed with npm install -g serverless or by following their documentation.

You will also need to export your AWS credentials. This can be done with the commands:

export AWS_SECRET_ACCESS_KEY=

Now that you have all the requirements, you can simply run the command make build, which will create a new config file, deploy the functions, and read the logs. Alternatively, you can do the following:

Create a new file config.js based on the existing config.js.dist. This is where you can configure the rate at which the functions will be called, the list of URLs, the AWS region, and the application stage (dev or prod).

Run the command serverless deploy (or the Makefile alias make deploy) so both functions are deployed on AWS Lambda. This will also start the cron job function, which runs on the interval defined in config.js.

With the deploy done, it is possible to see the logs of the application either on AWS CloudWatch or through the Serverless CLI, by typing serverless logs -f requestUrl -t. You can also use the Makefile alias make logs.
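The status rules described above could be sketched as a small classification helper. This is a hypothetical illustration, not the project's actual code; the function name, parameters, and error codes are assumptions:

```javascript
// Hypothetical sketch of the status decision described above.
// statusCode: HTTP status code of the response
// body: response HTML as a string
// pattern: required string from config.js
// error: a connection-level error object (or null on success)
function classify(statusCode, body, pattern, error) {
  if (error) {
    // Connection-level failures: refused vs. anything else (unknown)
    return error.code === 'ECONNREFUSED' ? 'connection_refused' : 'server_error';
  }
  if (statusCode < 200 || statusCode >= 300) {
    return 'bad_request';
  }
  // A simple indexOf search, as the project does
  return body.indexOf(pattern) !== -1 ? 'success' : 'not_fulfilled';
}
```

A log entry would then pair this status with the request duration and the URL before being written to CloudWatch.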
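As a rough idea of what the configuration mentioned above might contain, here is a hypothetical config.js sketch. The key names are assumptions based on the options the text describes (rate, URL list, region, stage); the real config.js.dist may use different names:

```javascript
// Hypothetical config.js sketch; copy config.js.dist and adjust instead.
module.exports = {
  rate: 'rate(5 minutes)', // how often the scrape cron job runs
  region: 'us-east-1',     // AWS region to deploy to
  stage: 'dev',            // application stage: dev or prod
  urls: [
    // Each entry pairs a URL with the string that must appear in its HTML.
    { url: 'https://example.com', pattern: 'Example Domain' },
  ],
};
```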