Due: Thursday, March 8, 2018 at 11:59 p.m.
Please turn in your answers for this homework assignment on Canvas under Extra Credit #3.
A web crawler is a program that starts at a web site and then recursively follows the links on that page. If you think of the links as branches of a tree, with the original web site as the root, the crawler follows each branch to the next "root" (i.e., web site) and repeats this process some number of times (the depth).
Your job is to write a simple web crawler. We will do this in two steps, which makes it less overwhelming. Also, we will focus on only one type of link: those that begin with "http:".
All links will look like this: href="link's url", so you have to extract the "link's url" part. You can do this in two ways. First, you can use the string method find() to look for 'href="', then for the closing '"', and extract the string between the two. Alternatively, you can use the regular expression package: import "re" and call re.findall() to pull out all the matches at once, where webpagetextstring is the contents of the web page you are checking. This returns a list of links.
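The exact re.findall() line is not reproduced above; a call in the following spirit would do. The variable name webpagetextstring comes from the text, but the sample HTML and the precise pattern are assumptions.

```python
import re

# webpagetextstring holds the raw HTML of the page being checked
# (this sample string is an assumption for illustration).
webpagetextstring = '<a href="http://nob.cs.ucdavis.edu/mhi289i/index.html">Home</a>'

# Match every href="..." whose URL begins with "http:"; the capturing
# group keeps only the URL itself. The exact pattern is a guess
# consistent with the description above.
links = re.findall(r'href="(http:[^"]*)"', webpagetextstring)
print(links)
```

The result is a list of URL strings, one per href attribute found in the page.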
Print the links in the following form (this is a partial listing only):
http://nob.cs.ucdavis.edu/mhi289i/index.html contains:

or, if there are no links, print:
http://nob.cs.ucdavis.edu/secure-exer/index.html contains no links
Call this program “crawler1.py” when you submit it.
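A minimal sketch of what crawler1.py might look like, assuming the standard urllib.request module for fetching and the regular-expression approach described above; the helper names and the exact pattern are illustrative assumptions, not part of the assignment text.

```python
import re
import urllib.request

def get_links(url):
    """Fetch a page and return the list of http: links it contains."""
    with urllib.request.urlopen(url) as f:
        webpagetextstring = f.read().decode("utf-8", errors="replace")
    return re.findall(r'href="(http:[^"]*)"', webpagetextstring)

def format_report(url, links):
    """Build the output string in the form the assignment asks for."""
    if not links:
        return url + " contains no links"
    return url + " contains:\n" + "\n".join("\t" + link for link in links)

def report(url, links):
    print(format_report(url, links))

if __name__ == "__main__":
    url = "http://nob.cs.ucdavis.edu/mhi289i/index.html"
    report(url, get_links(url))
```

Separating format_report() from the printing makes the output format easy to check without touching the network.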
A good web site to test your program on is http://nob.cs.ucdavis.edu/mhi289i/index.html.
Hint: Use a dictionary for this. The key would be the URL of the current web page and the value would be the list of links. Then, when you visit a web page, check that its URL is not in the dictionary. If it is, you already visited it and all its links, so simply return.
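Putting the hint together, the recursive step might be sketched as follows. The function names, the depth handling, and the injected fetch function are assumptions for illustration; passing fetch in keeps the crawler testable without a live network connection.

```python
import re

def crawl(url, depth, fetch, visited=None):
    """Recursively crawl starting at url, descending depth levels.

    fetch(url) returns the page text. visited maps each URL already
    seen to its list of links, as the hint suggests: if the URL is
    already a key, we have visited it and all its links, so we return.
    """
    if visited is None:
        visited = {}
    if depth < 0 or url in visited:
        return visited
    try:
        page = fetch(url)
    except OSError:          # unreachable page: record nothing, move on
        return visited
    links = re.findall(r'href="(http:[^"]*)"', page)
    visited[url] = links
    for link in links:
        crawl(link, depth - 1, fetch, visited)
    return visited
```

With urllib, one could pass something like fetch=lambda u: urllib.request.urlopen(u).read().decode("utf-8", errors="replace"); in a test, fetch can simply look pages up in a dictionary.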
Call this program “crawler2.py” when you submit it.
Last modified: Version of February 28, 2018 at 1:03PM
Winter Quarter 2018