The knowledge points covered are, in order:
1. Crawling the basic data from a single page.
2. Using the crawled data to perform a second round of crawling.
3. Looping over the pages to crawl all of the data.
Enough talk, let's get to work.
Analyzing the source code of the news section, we find that the data we want to scrape is structured as follows:
So we just need to point the spider's selector at (li:newsinfo_box_cf) and then scrape each match inside a for loop.
import scrapy

class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            url = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
Test: passed!
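As an aside, if you want to sanity-check the XPath expressions before running the full spider, Scrapy's interactive shell works well (a quick sketch; the selectors are the same ones used above):

scrapy shell "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1"

# Inside the shell, response is already bound to the downloaded page:
>>> boxes = response.xpath("//div[@class='newsinfo_box cf']")
>>> len(boxes)                      # number of news entries on this listing page
>>> boxes[0].xpath("div[@class='news_c fr']/h3/a/@href").extract_first()   # relative link of the first entry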
Now I have a set of URLs. Next, I need to follow each URL and scrape the title, date, and content I'm after. The implementation is also fairly simple: whenever the existing code picks up a URL, request that URL and scrape the corresponding data from it. So all I need is one more method that scrapes the news detail page, invoked via scrapy.Request.
# Method that scrapes the news detail page
def parse_dir_contents(self, response):
    item = GgglxyItem()
    item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
    item['href'] = response.url   # store the page URL
    item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
    data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
    item['content'] = data[0].xpath('string(.)').extract()[0]
    yield item
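For reference, GgglxyItem comes from the project's items.py, which isn't shown in this post; given the four fields assigned above, it presumably looks something like this minimal sketch:

import scrapy

class GgglxyItem(scrapy.Item):
    # Fields inferred from the assignments in parse_dir_contents
    date = scrapy.Field()
    href = scrapy.Field()
    title = scrapy.Field()
    content = scrapy.Field()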
After integrating this into the existing code, we have:
import scrapy
from ggglxy.items import GgglxyItem

class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            url = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            # Hand the URL to the news-scraping method
            yield scrapy.Request(url, callback=self.parse_dir_contents)

    # Method that scrapes the news detail page
    def parse_dir_contents(self, response):
        item = GgglxyItem()
        item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
        item['href'] = response.url   # store the page URL
        item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
        data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
        item['content'] = data[0].xpath('string(.)').extract()[0]
        yield item
Test: passed!
Now we add a loop for pagination:
NEXT_PAGE_NUM = 1

NEXT_PAGE_NUM = NEXT_PAGE_NUM + 1
if NEXT_PAGE_NUM < 11:
    next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % NEXT_PAGE_NUM
    yield scrapy.Request(next_url, callback=self.parse)
Merged into the original code:
import scrapy
from ggglxy.items import GgglxyItem

NEXT_PAGE_NUM = 1

class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            URL = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            yield scrapy.Request(URL, callback=self.parse_dir_contents)

        # Request the next listing page until page 10
        global NEXT_PAGE_NUM
        NEXT_PAGE_NUM = NEXT_PAGE_NUM + 1
        if NEXT_PAGE_NUM < 11:
            next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % NEXT_PAGE_NUM
            yield scrapy.Request(next_url, callback=self.parse)

    def parse_dir_contents(self, response):
        item = GgglxyItem()
        item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
        item['href'] = response.url   # store the page URL
        item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
        data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
        item['content'] = data[0].xpath('string(.)').extract()[0]
        yield item
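A quick note on the design: the module-level NEXT_PAGE_NUM counter with global does the job here, but module-level state is easy to break (for example, if the listing logic ever branches or the spider is re-run in the same process). A more self-contained alternative, sketched below under the assumption that the page number is the only state we need, is to carry the counter on the request itself via Request.meta; only the pagination part of parse is shown, the item-link loop stays the same:

    def parse(self, response):
        # ... scrape the news links on this page exactly as above ...

        # Read the current page number from the request that produced this response
        # (requests from start_urls have no 'page' key, so default to 1)
        page = response.meta.get('page', 1)
        if page < 10:
            next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % (page + 1)
            yield scrapy.Request(next_url, callback=self.parse, meta={'page': page + 1})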
Test:
We scraped 191 items, but the official site shows 193 news articles, so two are missing.
Why? Looking at the log, we notice two errors:
Locating the problem: it turns out the school's news section also contains two hidden second-level columns:
For example:
The corresponding URL is
The URLs look completely different, so no wonder they weren't being picked up!
So we need a dedicated rule for these two second-level column URLs. All it takes is a check for whether a URL belongs to a second-level column (the links to these columns carry a 'type' parameter in the URL, unlike the regular detail-page links):
if URL.find('type') != -1:
    yield scrapy.Request(URL, callback=self.parse)
Assembled back into the original function:
import scrapy
from ggglxy.items import GgglxyItem

NEXT_PAGE_NUM = 1

class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            URL = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            if URL.find('type') != -1:
                # Second-level column page: feed it back into parse as another listing page
                yield scrapy.Request(URL, callback=self.parse)
            yield scrapy.Request(URL, callback=self.parse_dir_contents)

        # Request the next listing page until page 10
        global NEXT_PAGE_NUM
        NEXT_PAGE_NUM = NEXT_PAGE_NUM + 1
        if NEXT_PAGE_NUM < 11:
            next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % NEXT_PAGE_NUM
            yield scrapy.Request(next_url, callback=self.parse)

    def parse_dir_contents(self, response):
        item = GgglxyItem()
        item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
        item['href'] = response.url   # store the page URL
        item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
        data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
        item['content'] = data[0].xpath('string(.)').extract()[0]
        yield item
Test:
We find that the number of scraped items has gone up from the 191 we got before to 238, and the log no longer shows any errors, which means our crawl rules are working!
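Finally, we can dump everything the spider collects to a JSON file using Scrapy's built-in feed export: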
scrapy crawl news_info_2 -o 0016.json
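One small caveat: depending on the Scrapy version, non-ASCII text may end up in the JSON file as \uXXXX escape sequences by default; if that happens, setting FEED_EXPORT_ENCODING = 'utf-8' in settings.py keeps the Chinese text readable in the output.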