
Thursday, 31 December 2015

How to Extract All the Links of a Given Webpage (with Python)


Websites are made of web pages, and every webpage is made of HTML tags. Every webpage also has a unique address called a URL (Uniform Resource Locator), which we can use to request the webpage's HTML code from the server. Once we send the request, the server returns the HTML file for the webpage, and our browser renders that code into the readable page you see. This process takes place every time we browse the internet.

In today's post, we will see how to do that without the browser, through Python instead, and how to grab all the links present on a webpage. A link is simply another URL. There are many uses for this. For example, a web crawler does it all the time: it goes through the HTML code of a website, copies the links, then visits each of those links and looks for further links on those pages too. This is how Google indexes pages; its crawlers work through the web continuously.
Python makes this process very easy. So let's get started with the program. For this program we will need two modules:
  • urllib2
  • beautifulsoup4

urllib2 is used for sending GET requests to the server where the website resides. This lets us save the HTML code of the webpage locally on our own machine. (urllib2 is a Python 2 module; on Python 3 its role is filled by urllib.request.)

BeautifulSoup4 is used for finding specific elements present in the HTML code. It allows us to isolate any specific tag like 'a' or 'div' or any other tag. It also lets us play around with its contents.

Steps:

  1. First of all we are going to import both libraries into our program.
  2. Then we will get the HTML code of the desired webpage using urllib2.urlopen() function.
  3. Then we are going to parse the HTML code using BeautifulSoup() function.
  4. Once we have the BeautifulSoup object ready, it is very easy to fetch all the links. All the links are written inside anchor ('a') tags.
  5. To get all the anchor tags, we use BeautifulSoup's find_all('a') function. This gives us a list of all the anchor tags on the page.
  6. But we don't want the whole anchor tag, only the URL. For that we call get('href') on each tag, which returns the value of that tag's href attribute.
  7. We make a list of all such URLs on the page and we are done.

This will only crawl the page specified. If you want to crawl the whole website, the process should be repeated for each link in the list you have made. That will take a lot of time.
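If you do want to go further, a minimal breadth-first crawler along these lines can be sketched as below. This is an illustration, not the post's own code: the `fetch` parameter is a hypothetical hook introduced here (in practice it would wrap urllib2.urlopen, or urllib.request.urlopen on Python 3), injected so the sketch can be exercised without network access.

```python
from urllib.parse import urljoin, urlparse
from bs4 import BeautifulSoup

def crawl(start_url, fetch, max_pages=50):
    """Breadth-first crawl: visit start_url, collect its links, repeat.

    fetch(url) -> HTML string; injected so the sketch stays testable.
    Only follows links on the same domain as start_url.
    """
    domain = urlparse(start_url).netloc
    seen = {start_url}
    queue = [start_url]
    while queue and len(seen) <= max_pages:
        url = queue.pop(0)
        try:
            html = fetch(url)
        except Exception:
            continue  # unreachable page: skip it
        for tag in BeautifulSoup(html, "html.parser").find_all("a"):
            href = tag.get("href")
            if not href:
                continue
            link = urljoin(url, href)  # resolve relative links
            if urlparse(link).netloc == domain and link not in seen:
                seen.add(link)
                queue.append(link)
    return seen
```

The `seen` set is what keeps the crawler from looping forever when two pages link to each other, and `max_pages` caps the total work, since a real site can be effectively unbounded.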

Also, not all the links in the anchor tags will be complete links. Some of them will not have the domain name in them. For example, instead of http://lifeandeating.blogspot.com/2015/12/checking-for-email-address-validity.html, only /2015/12/checking-for-email-address-validity.html will be given. So to access them we will have to prefix them with the domain name of the website, starting with http://.
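Rather than gluing the domain on by hand, Python's standard library can resolve a relative link against the URL of the page it was found on (urljoin lives in the urlparse module on Python 2 and in urllib.parse on Python 3):

```python
from urllib.parse import urljoin  # Python 2: from urlparse import urljoin

page = "http://lifeandeating.blogspot.com/2015/12/how-to-extract-all-links-of-given.html"
href = "/2015/12/checking-for-email-address-validity.html"

# Resolves the relative href against the page it came from:
full = urljoin(page, href)
print(full)
# http://lifeandeating.blogspot.com/2015/12/checking-for-email-address-validity.html
```

urljoin also leaves already-complete links untouched, so it is safe to run on every href in the list.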

All these links can be copied to a file for future use, or used directly in this program itself.
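Saving them is only a couple of lines; a quick sketch, assuming the extracted URLs are already collected in a list named `links` (the filename links.txt is just an example):

```python
links = [
    "http://lifeandeating.blogspot.com/2015/12/checking-for-email-address-validity.html",
    "http://lifeandeating.blogspot.com/",
]

# Write one URL per line so the file is easy to read back later.
with open("links.txt", "w") as f:
    for link in links:
        f.write(link + "\n")

# Reading them back for reuse in this (or another) program:
with open("links.txt") as f:
    saved = [line.strip() for line in f]
```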

This is the code for the program:
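The original post embedded the program as a screenshot; as a stand-in, here is a minimal sketch consistent with the steps above. It uses Python 3's urllib.request (the Python 3 equivalent of urllib2) and BeautifulSoup; on Python 2 you would `import urllib2` and call `urllib2.urlopen` instead. The target URL in the comment is just an example.

```python
from urllib.request import urlopen  # Python 2: import urllib2
from bs4 import BeautifulSoup

def get_links(html):
    """Return the href of every anchor tag in the given HTML."""
    soup = BeautifulSoup(html, "html.parser")   # step 3: parse the HTML
    anchors = soup.find_all("a")                # step 5: all anchor tags
    # step 6: keep just the URLs, skipping anchors with no href
    return [a.get("href") for a in anchors if a.get("href")]

# Putting it together (requires network access):
#   html = urlopen("http://lifeandeating.blogspot.com/").read()  # step 2
#   for link in get_links(html):                                 # step 7
#       print(link)
```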


The output would simply be the list of extracted URLs, printed one per line.


Pretty easy, isn't it?

That's all for today. If you have any questions, please ask in the comments below or make use of the 'contact us' button.

Don’t forget to subscribe to the newsletter to receive future such posts right to your inbox.

Thank you.



Permalink: https://lifeandeating.blogspot.com/2015/12/how-to-extract-all-links-of-given.html
