• A web crawler is a program that automatically retrieves information from the World Wide Web according to certain rules: it requests a specified URL and retrieves the response body.
Requests Module: Reading Website Files
• To systematically automate the collection of information from the internet, the first step is to be able to extract webpage
content or files from websites for further processing.
• Python's "requests" module lets users make requests to websites and obtain the response content using a simple, readable syntax.
• You can install it with: "pip install -U requests".
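A minimal sketch of the request-and-read step described above, using the requests module (the URL and the `fetch_page` helper name are illustrative, not from the original notes):

```python
import requests

def fetch_page(url, timeout=10):
    """Request the given URL and return the response body as text."""
    resp = requests.get(url, timeout=timeout)
    resp.raise_for_status()                  # raise an error on 4xx/5xx status codes
    resp.encoding = resp.apparent_encoding   # guess the text encoding from the body
    return resp.text

if __name__ == "__main__":
    html = fetch_page("https://example.com")
    print(html[:200])                        # first part of the response body
```

`raise_for_status()` turns HTTP error responses into exceptions instead of silently returning an error page, which is usually what a crawler wants.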
LabelImg
Support Yolo
download: https://github.com/HumanSignal/labelImg/releases
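When LabelImg is switched to YOLO mode, it saves one `.txt` file per image, where each line is `<class_id> <x_center> <y_center> <width> <height>` with coordinates normalized to [0, 1] by the image size. A small sketch of that conversion (the `to_yolo_line` helper is illustrative, not part of LabelImg):

```python
def to_yolo_line(class_id, box, img_w, img_h):
    """Convert a pixel box (xmin, ymin, xmax, ymax) to a YOLO label line."""
    xmin, ymin, xmax, ymax = box
    x_c = (xmin + xmax) / 2 / img_w   # box center, normalized by image width
    y_c = (ymin + ymax) / 2 / img_h   # box center, normalized by image height
    w = (xmax - xmin) / img_w         # box width, normalized
    h = (ymax - ymin) / img_h         # box height, normalized
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A box from (100, 200) to (300, 400) in a 640x480 image:
print(to_yolo_line(0, (100, 200, 300, 400), 640, 480))
# → 0 0.312500 0.625000 0.312500 0.416667
```

Because the coordinates are normalized, the same label file remains valid if the image is later resized for training.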
Prepare Dataset
When using deep learning for image object detection, many images are
required for training. The higher the image resolution, the better: more
objects can be labeled, and there is greater flexibility for subsequent use.
In preparing images, we recommend paying attention to image diversity and
variation. If we want to use photos of citrus for identification, we need to