crawling-and-serving Akamai Task

Crawling & Serving

The crawler is a simple service that crawls the web, storing page results in a database and downloaded assets on the file system. The server is a simple service that serves the crawler's results.

Crawler

Usage

Post a JSON object to the crawler in the following format:

```
POST domain.com/crawl
{ "url": "http://www.example.com" }
```
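As a sketch, the request above could be sent from a TypeScript client like this (the base URL is a placeholder, and the helper names are assumptions, not part of the project's API; assumes Node 18+ with a global `fetch`):

```typescript
// Placeholder host; substitute the actual deployment address.
const BASE_URL = "http://domain.com";

// Build the JSON body the /crawl endpoint expects.
function buildCrawlRequest(url: string): string {
  return JSON.stringify({ url });
}

// POST the crawl request to the /crawl endpoint.
async function requestCrawl(url: string): Promise<Response> {
  return fetch(`${BASE_URL}/crawl`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildCrawlRequest(url),
  });
}
```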

The crawler will then crawl the given URL, store the results in a database, and save the downloaded assets on the file system under crawler_assests/www.example.com/.
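The asset directory is keyed by the crawled site's hostname. A minimal sketch of how that path could be derived (the helper name is hypothetical; only the crawler_assests/<hostname>/ layout comes from the text above):

```typescript
// Derive the per-site asset directory from a crawled URL's hostname,
// following the crawler_assests/<hostname>/ convention described above.
function assetDirFor(url: string): string {
  const { hostname } = new URL(url);
  return `crawler_assests/${hostname}`;
}
```

For example, `assetDirFor("http://www.example.com")` yields `crawler_assests/www.example.com`.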