
Crawling & Serving

The crawler crawls the web, storing page results in a database and downloaded assets on the file system. The server serves the crawler's results over a REST API.

Crawler

Usage

Start a crawl by POSTing a JSON object to the crawler in the following format:

POST domain.com/crawl
{ "url": "http://www.example.com" }

The crawler will then crawl the given URL, store the results in the database, and save the page's assets on the file system under crawler_assests/www.example.com/.
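For example, a crawl could be triggered from a small TypeScript client like the sketch below. The host http://localhost:3000 is an assumption (the common NestJS default), not part of the API; substitute the address your server actually listens on.

```ts
// Minimal sketch: trigger a crawl of one URL.
// http://localhost:3000 is an assumed local address, not a documented host.
async function startCrawl(url: string): Promise<void> {
  const res = await fetch('http://localhost:3000/crawl', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url }),
  });
  console.log(res.status, await res.text());
}

startCrawl('http://www.example.com');
```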

API

A simple REST API that serves the results stored by the crawler.

Routes

GET

/sites - Returns a list of all sites
/sites/:id - Returns the site object for the given site id
/sites/domain/:domain - Returns the site object for the given domain
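A usage sketch for the read routes. The base URL is an assumption, and the shape of the returned site object is not documented here, so the examples only log the raw JSON:

```ts
// Usage sketch for the GET routes; BASE is an assumed local address.
const BASE = 'http://localhost:3000';

// List every crawled site.
async function listSites(): Promise<void> {
  const res = await fetch(`${BASE}/sites`);
  console.log(await res.json());
}

// Fetch one site by the domain it was crawled under.
async function getSiteByDomain(domain: string): Promise<void> {
  const res = await fetch(`${BASE}/sites/domain/${domain}`);
  console.log(await res.json());
}

listSites();
getSiteByDomain('www.example.com');
```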

DELETE

/sites/:id - Deletes the site object for the given site id
/sites/domain/:domain - Deletes the site object for the given domain
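The delete routes mirror the read routes. A sketch, again assuming the base URL and using placeholder id/domain values:

```ts
// Sketch of the DELETE routes; BASE and the argument values are assumptions.
const BASE = 'http://localhost:3000';

async function deleteSite(id: string): Promise<void> {
  const res = await fetch(`${BASE}/sites/${id}`, { method: 'DELETE' });
  console.log('delete by id:', res.status);
}

async function deleteSiteByDomain(domain: string): Promise<void> {
  const res = await fetch(`${BASE}/sites/domain/${domain}`, { method: 'DELETE' });
  console.log('delete by domain:', res.status);
}
```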

POST

/sites/:id - Updates the site object for the given site id
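A hedged sketch of an update call. The README does not document the request body, so the field shown below (url) and the id value are placeholders, and the base URL is the same assumption as above:

```ts
// Sketch of the update route. BASE, the id value, and the body fields
// are all assumptions; the update payload is not documented.
const BASE = 'http://localhost:3000';

async function updateSite(id: string, patch: Record<string, unknown>): Promise<void> {
  const res = await fetch(`${BASE}/sites/${id}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(patch),
  });
  console.log(res.status, await res.text());
}

updateSite('<site-id>', { url: 'http://www.example.com' });
```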