
Crawling & Serving

The crawler is a simple service that crawls a given website, stores the results in a database, and saves the downloaded assets on the file system. The server is a simple service that serves the crawler's results.

Crawler

Usage

Post a JSON object to the crawler with the following format:

POST domain.com/crawl

{ "url": "http://www.example.com" }

The crawler will then crawl the given URL, store the results in a database, and save the assets on the file system under crawler_assests/www.example.com/.