By Saw

2018-07-14 09:41:28 8 Comments

I've used CrawlSpider successfully before. But after I changed the code to integrate with Redis and added my own middlewares to set the User-Agent and cookies, the spider no longer parses the responses. As a result it generates no new requests and closes soon after starting.

Here are the running stats.

Even if I add this:

def parse_start_url(self, response):
    return self.parse_item(response)

it only parses the response from the first URL.

Here's my code.

Spider:

# -*- coding: utf-8 -*-
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule, CrawlSpider

from yydzh.items import YydzhItem


class YydzhSpider(CrawlSpider):
    name = 'yydzhSpider'
    allowed_domains = ['']
    start_urls = ['']
    rules = (
        # the LinkExtractor pattern was lost in the posted code
        Rule(LinkExtractor(...), callback='parse_item', follow=True),
    )

    # def parse_start_url(self, response):
    #     return self.parse_item(response)

    def parse_item(self, response):
        item = YydzhItem()
        # the row-selector XPath was lost in the posted code
        for each in response.xpath(...):
            item['title'] = each.xpath("./td[2]/h3[1]/a//text()").extract()[0]
            item['author'] = each.xpath('./td[3]/a//text()').extract()[0]
            item['category'] = each.xpath('./td[2]/span[1]//text()').extract()[0]
            item['url'] = each.xpath("./td[2]/h3[1]//a/@href").extract()[0]
            yield item

Settings I think are crucial:

SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
DOWNLOADER_MIDDLEWARES = {
    'yydzh.middlewares.UserAgentmiddleware': 500,
    'yydzh.middlewares.CookieMiddleware': 600,
}
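For context, scrapy-redis also has a persistence flag that decides whether the Redis request queue and dupefilter fingerprints survive after the spider closes. The value below is an assumption about this project's setup, not something shown in the question:

```python
# scrapy-redis setting: if True, the request queue and the dupefilter's
# fingerprint set are kept in Redis when the spider closes (they persist
# across runs); if False, they are flushed on close.
SCHEDULER_PERSIST = True
```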

Middleware: UserAgentmiddleware changes the user agent randomly so the server is less likely to notice the crawler.

CookieMiddleware attaches cookies to requests for pages that require a login to view.
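The user-agent rotation amounts to picking a random entry from a pool of user-agent strings on every request. A minimal stand-alone sketch of that idea (the agents list below is made up for illustration; a real project would keep a much longer list):

```python
import random

# Hypothetical pool of user-agent strings (illustrative values only)
agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]

def pick_user_agent(pool):
    """Return a random user-agent string, as the middleware does per request."""
    return random.choice(pool)

# Each outgoing request gets a freshly chosen header value
headers = {"User-Agent": pick_user_agent(agents)}
print(headers["User-Agent"] in agents)  # True
```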

import json
import logging
import random

import redis
from scrapy.downloadermiddlewares.retry import RetryMiddleware
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware

logger = logging.getLogger(__name__)


class UserAgentmiddleware(UserAgentMiddleware):

    def process_request(self, request, spider):
        # `agents` is a module-level list defined elsewhere in the project
        agent = random.choice(agents)
        request.headers["User-Agent"] = agent


class CookieMiddleware(RetryMiddleware):

    def __init__(self, settings, crawler):
        RetryMiddleware.__init__(self, settings)
        self.rconn = redis.Redis(host=REDIS_HOST, port=REDIS_PORT,
                                 password=REDIS_PASS, db=1, decode_responses=True)

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings, crawler)

    def process_request(self, request, spider):
        redisKeys = self.rconn.keys()
        if len(redisKeys) > 0:  # the posted code had a `while` loop with no break
            elem = random.choice(redisKeys)
            # the key prefix was lost in the posted code; spider.name is a guess
            if spider.name + ':Cookies' in elem:
                cookie = json.loads(self.rconn.get(elem))
                request.cookies = cookie
                request.meta["accountText"] = elem.split("Cookies:")[-1]

    def process_response(self, request, response, spider):
        # "您没有登录或者您没有权限访问此页面" means "You are not logged in or
        # do not have permission to view this page"
        if '您没有登录或者您没有权限访问此页面' in str(response.body):
            accountText = request.meta["accountText"]
            # an argument was lost in the posted code; spider.name is a guess
            remove_cookie(self.rconn, spider.name, accountText)
            update_cookie(self.rconn, spider.name, accountText)
            # message: "Cookie updated successfully! (account: %s)"
            logger.warning("更新Cookie成功!(账号为:%s)" % accountText)
            return request

        return response


@user10084120 2018-07-15 12:40:35

Found the problem: all the URLs had already been recorded by the Redis dupefilter during previous runs, so every new request was filtered out as a duplicate. Restarting (flushing) the Redis server solves the problem.
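The behavior the answer describes can be reproduced with a toy version of fingerprint-based duplicate filtering: scrapy-redis records a fingerprint of each request in a Redis set, and because that set outlives the spider, a second run sees every start URL as an already-seen duplicate. A minimal in-memory sketch of the idea (a plain Python set stands in for Redis, and the fingerprint is a simple hash; the real RFPDupeFilter's fingerprint also covers method, body, and headers):

```python
import hashlib

class ToyDupeFilter:
    """In-memory stand-in for scrapy_redis's Redis-backed RFPDupeFilter."""

    def __init__(self):
        self.fingerprints = set()  # scrapy-redis keeps this set in Redis

    def fingerprint(self, method, url):
        # Simplified fingerprint: hash of method and URL
        return hashlib.sha1(f"{method} {url}".encode()).hexdigest()

    def request_seen(self, method, url):
        """Return True if the request was seen before; record it otherwise."""
        fp = self.fingerprint(method, url)
        if fp in self.fingerprints:
            return True
        self.fingerprints.add(fp)
        return False

df = ToyDupeFilter()
print(df.request_seen("GET", "http://example.com/page1"))  # False: first visit
print(df.request_seen("GET", "http://example.com/page1"))  # True: filtered out
```

Because the real fingerprint set lives in Redis rather than in memory, it survives the spider's shutdown; deleting the spider's dupefilter key (or flushing the Redis database, as the answer suggests) before the next run clears it.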
