By Jeff


2011-07-04 00:09:17 8 Comments

So, my problem is relatively simple. I have one spider crawling multiple sites, and I need it to return the data in the order I write it in my code. It's posted below.

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from mlbodds.items import MlboddsItem

class MLBoddsSpider(BaseSpider):
   name = "sbrforum.com"
   allowed_domains = ["sbrforum.com"]
   start_urls = [
       "http://www.sbrforum.com/mlb-baseball/odds-scores/20110328/",
       "http://www.sbrforum.com/mlb-baseball/odds-scores/20110329/",
       "http://www.sbrforum.com/mlb-baseball/odds-scores/20110330/"
   ]

   def parse(self, response):
       hxs = HtmlXPathSelector(response)
       sites = hxs.select('//div[@id="col_3"]//div[@id="module3_1"]//div[@id="moduleData4952"]')
       items = []
       for site in sites:
           item = MlboddsItem()
           item['header'] = site.select('//div[@class="scoreboard-bar"]//h2//span[position()>1]//text()').extract()# | /*//table[position()<2]//tr//th[@colspan="2"]//text()').extract()
           item['game1'] = site.select('/*//table[position()=1]//tr//td[@class="tbl-odds-c2"]//text() | /*//table[position()=1]//tr//td[@class="tbl-odds-c4"]//text() | /*//table[position()=1]//tr//td[@class="tbl-odds-c6"]//text()').extract()
           items.append(item)
       return items

The results are returned in a random order, for example it returns the 29th, then the 28th, then the 30th. I've tried changing the scheduler order from DFO to BFO, just in case that was the problem, but that didn't change anything.

Thanks in advance.

10 comments

@Higor Sigaki 2018-02-06 17:40:52

There is a much easier way to make Scrapy follow the order of start_urls: you can just uncomment and change the concurrent-requests setting in settings.py to 1.

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 1

@Annerose N 2018-06-25 17:46:15

Or add custom_settings = { 'CONCURRENT_REQUESTS': 1 } right below class DmozSpider(BaseSpider): name = "dmoz". This way you don't need an extra settings.py file.

@Higor Sigaki 2018-06-26 18:50:18

settings.py is a default file in the structure of a Scrapy project, not an extra file.

@Leon Hu 2017-12-22 13:16:10

Personally I like @user1460015's implementation, arrived at after I managed to get my own workaround working.

My solution is to use Python's subprocess module to call scrapy url by url until all urls have been taken care of.

In my code, if the user does not specify that the urls should be parsed sequentially, we can start the spider in the normal way.

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess({'USER_AGENT': 'Mozilla/4.0 (compatible; '
    'MSIE 7.0; Windows NT 5.1)'})
process.crawl(Spider, url=args.url)
process.start()

If a user specifies it needs to be done sequentially, we can do this:

import subprocess

for url in urls:
    # Pass the command as an argument list so no shell is needed
    process = subprocess.Popen(['scrapy', 'runspider', 'scrapper.py',
        '-a', 'url=' + url, '-o', outputfile])
    process.wait()

Note that: this implementation does not handle errors.

@Sandeep Balagopal 2014-10-23 08:35:18

Scrapy's Request has a priority attribute now: http://doc.scrapy.org/en/latest/topics/request-response.html#request-objects If you have many Requests in a function and want to process a particular request first, you can set:

def parse(self, response):
    url = 'http://www.example.com/first'
    yield Request(url=url, callback=self.parse_data, priority=1)
    url = 'http://www.example.com/second'
    yield Request(url=url, callback=self.parse_data)

Scrapy will process the one with priority 1 first.

@warvariuc 2011-07-06 08:18:38

start_urls defines the urls which are used in the start_requests method. Your parse method is called with a response for each start url once its page is downloaded. But you cannot control loading times - the first start url might be the last to arrive at parse.

A solution -- override the start_requests method and add a meta with a priority key to the generated requests. In parse, extract this priority value and add it to the item. In the pipeline do something based on this value. (I don't know why and where you need these urls to be processed in this order.)

Or make it kind of synchronous -- store these start urls somewhere, and put only the first of them in start_urls. In parse, process the first response and yield the item(s), then take the next url from your storage and make a request for it with parse as the callback.

@Jeff 2011-07-06 11:38:31

All great feedback, thanks everyone for the help. This one got closest to what I wanted to do.

@Prakhar Mohan Srivastava 2015-01-19 11:26:52

I have a related question. Suppose I want to specify a list of URLs such that the first is the homepage of a website and the next is a list of webpages. How do I go about it?

@warvariuc 2015-01-19 16:18:24

@PrakharMohanSrivastava, put them in start_urls?

@user1460015 2012-06-27 22:54:20

This solution is sequential, and is similar to @wuliang's.

I started with @Alexis de Tréglodé's method but ran into a problem: the fact that your start_requests() method returns a list of URLs,

return [ Request(url = start_url) for start_url in start_urls ]

is what causes the output to be non-sequential (asynchronous).

If the return is a single request instead, then creating an alternative list, other_urls, can fulfill the requirements. other_urls can also be used to add in URLs scraped from other webpages.

from scrapy import log
from scrapy.spider import BaseSpider
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from practice.items import MlboddsItem

log.start()

class PracticeSpider(BaseSpider):
    name = "sbrforum.com"
    allowed_domains = ["sbrforum.com"]

    other_urls = [
            "http://www.sbrforum.com/mlb-baseball/odds-scores/20110328/",
            "http://www.sbrforum.com/mlb-baseball/odds-scores/20110329/",
            "http://www.sbrforum.com/mlb-baseball/odds-scores/20110330/",
           ]

    def start_requests(self):
        log.msg('Starting Crawl!', level=log.INFO)
        start_urls = "http://www.sbrforum.com/mlb-baseball/odds-scores/20110327/"
        return [Request(start_urls, meta={'items': []})]

    def parse(self, response):
        log.msg("Begin Parsing", level=log.INFO)
        log.msg("Response from: %s" % response.url, level=log.INFO)
        hxs = HtmlXPathSelector(response)
        sites = hxs.select("//*[@id='moduleData8460']")
        items = response.meta['items']
        for site in sites:
            item = MlboddsItem()
            item['header'] = site.select('//div[@class="scoreboard-bar"]//h2//span[position()>1]//text()').extract()
            item['game1'] = site.select('/*//table[position()=1]//tr//td[@class="tbl-odds-c2"]//text()').extract()
            items.append(item)

        # here we .pop(0) the next URL in line
        if self.other_urls:
            return Request(self.other_urls.pop(0), meta={'items': items})

        return items

@wuliang 2012-04-18 07:33:10

Of course you can control it. The secret is how you feed the greedy engine/scheduler. Your requirement is a modest one. Note the list named "task_urls" that I have added.

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from dirbot.items import Website

class DmozSpider(BaseSpider):
   name = "dmoz"
   allowed_domains = ["sbrforum.com"]
   start_urls = [
       "http://www.sbrforum.com/mlb-baseball/odds-scores/20110328/",
   ]
   task_urls = [
       "http://www.sbrforum.com/mlb-baseball/odds-scores/20110328/",
       "http://www.sbrforum.com/mlb-baseball/odds-scores/20110329/",
       "http://www.sbrforum.com/mlb-baseball/odds-scores/20110330/"
   ]
   def parse(self, response): 

       hxs = HtmlXPathSelector(response)
       sites = hxs.select('//div[@id="col_3"]//div[@id="module3_1"]//div[@id="moduleData4952"]')
       items = []
       for site in sites:
           item = Website()
           item['header'] = site.select('//div[@class="scoreboard-bar"]//h2//span[position()>1]//text()').extract()# | /*//table[position()<2]//tr//th[@colspan="2"]//text()').extract()
           item['game1'] = site.select('/*//table[position()=1]//tr//td[@class="tbl-odds-c2"]//text() | /*//table[position()=1]//tr//td[@class="tbl-odds-c4"]//text() | /*//table[position()=1]//tr//td[@class="tbl-odds-c6"]//text()').extract()
           items.append(item)
       # Here we feed add new request
       self.task_urls.remove(response.url)
       if self.task_urls:
           r = Request(url=self.task_urls[0], callback=self.parse)
           items.append(r)

       return items

If you want some more complicated case, please see my project: https://github.com/wuliang/TiebaPostGrabber

@Alexis 2012-02-07 12:57:03

The Google group discussion suggests using the priority attribute in the Request object. Scrapy guarantees the urls are crawled in DFO by default. But it does not ensure that the urls are visited in the order they were yielded within your parse callback.

Instead of yielding Request objects, you want to return an array of Requests from which objects will be popped until it is empty.

Can you try something like that?

from scrapy.spider import BaseSpider
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from mlbodds.items import MlboddsItem

class MLBoddsSpider(BaseSpider):
   name = "sbrforum.com"
   allowed_domains = ["sbrforum.com"]

   def start_requests(self):
       start_urls = reversed( [
           "http://www.sbrforum.com/mlb-baseball/odds-scores/20110328/",
           "http://www.sbrforum.com/mlb-baseball/odds-scores/20110329/",
           "http://www.sbrforum.com/mlb-baseball/odds-scores/20110330/"
       ] )

       return [ Request(url = start_url) for start_url in start_urls ]

   def parse(self, response):
       hxs = HtmlXPathSelector(response)
       sites = hxs.select('//div[@id="col_3"]//div[@id="module3_1"]//div[@id="moduleData4952"]')
       items = []
       for site in sites:
           item = MlboddsItem()
           item['header'] = site.select('//div[@class="scoreboard-bar"]//h2//span[position()>1]//text()').extract()# | /*//table[position()<2]//tr//th[@colspan="2"]//text()').extract()
           item['game1'] = site.select('/*//table[position()=1]//tr//td[@class="tbl-odds-c2"]//text() | /*//table[position()=1]//tr//td[@class="tbl-odds-c4"]//text() | /*//table[position()=1]//tr//td[@class="tbl-odds-c6"]//text()').extract()
           items.append(item)
       return items

@user 2011-07-04 06:00:40

I doubt it's possible to achieve what you want unless you play with Scrapy internals. There are some similar discussions on the Scrapy Google group, e.g.

http://groups.google.com/group/scrapy-users/browse_thread/thread/25da0a888ac19a9/1f72594b6db059f4?lnk=gst

One thing that can also help is setting CONCURRENT_REQUESTS_PER_SPIDER to 1, but it won't completely ensure the order either, because the downloader has its own local queue for performance reasons, so the best you can do is prioritize the requests without ensuring their exact order.

@emish 2011-07-04 03:26:45

I believe the

hxs.select('...')

calls you make will scrape the data from the site in the order it appears. Either that, or Scrapy is going through your start_urls in an arbitrary order. To force it to go through them in a predefined order (mind you, this won't work if you need to crawl more sites), you can try this:

start_urls = ["url1.html"]

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    sites = hxs.select('blah')
    items = []
    for site in sites:
        item = MlboddsItem()
        item['header'] = site.select('blah')
        item['game1'] = site.select('blah')
        items.append(item)
    # list.append() returns None, so append first and return the list itself
    items.append(Request('url2.html', callback=self.parse2))
    return items

then write a parse2 that does the same thing but appends a Request for url3.html with callback=self.parse3. This is horrible coding style, but I'm just throwing it out in case you need a quick hack.

@Jan Z 2011-07-04 02:15:15

Disclaimer: haven't worked with scrapy specifically

The scraper may be queueing and requeueing requests based on timeouts and HTTP errors; it would be a lot easier if you could get at the date from the response page.

I.e. add another hxs.select statement that grabs the date (I just had a look; it is definitely in the response data), add that to the item dict, and sort the items based on it.

This is probably a more robust approach, rather than relying on order of scrapes...
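A sketch of that sorting step in plain Python. For simplicity the date is taken from the YYYYMMDD segment embedded in each page URL (in practice you would extract it from the response body as suggested above); the item dicts are hypothetical:

```python
from datetime import datetime

# Hypothetical scraped items, each carrying the url it came from,
# received in whatever order the downloader finished them
items = [
    {'url': 'http://www.sbrforum.com/mlb-baseball/odds-scores/20110329/'},
    {'url': 'http://www.sbrforum.com/mlb-baseball/odds-scores/20110328/'},
    {'url': 'http://www.sbrforum.com/mlb-baseball/odds-scores/20110330/'},
]

def date_key(item):
    # The trailing path segment is a YYYYMMDD date
    stamp = item['url'].rstrip('/').rsplit('/', 1)[-1]
    return datetime.strptime(stamp, '%Y%m%d')

# Restore chronological order regardless of download order
items.sort(key=date_key)
```

Sorting after the crawl sidesteps the scheduling problem entirely, which is why it is more robust than trying to control the order of the requests themselves.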
