
2008-11-04 01:25:26 8 Comments

Please advise how to scrape AJAX pages.


@hekimgil 2017-12-07 13:17:42

Selenium WebDriver is a good solution: you program a browser and automate whatever needs to be done in it. Browsers (Chrome, Firefox, etc.) provide their own drivers that work with Selenium. Since it works as an automated REAL browser, pages (including their javascript and Ajax) load just as they do for a human using that browser.

The downside is that it is slow, since you will usually want to wait for all images and scripts to load before scraping that single page.
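As a minimal sketch of this approach using the Python Selenium bindings (the URL and CSS selector are placeholders, not anything from the answer), waiting for a specific element is usually faster than waiting for the whole page:

```python
def scrape_rendered(url, css_selector, timeout=10):
    """Load `url` in a real Chrome, wait until `css_selector` appears
    (i.e. until the AJAX content has rendered), and return its text."""
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()   # Chrome's own driver, as the answer notes
    try:
        driver.get(url)
        # Poll until the AJAX-rendered element exists, instead of sleeping.
        element = WebDriverWait(driver, timeout).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, css_selector))
        )
        return element.text
    finally:
        driver.quit()

# Hypothetical usage:
# print(scrape_rendered("https://example.com", "#ajax-content"))
```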

@Deepan Prabhu Babu 2011-06-26 12:49:09

I previously linked to MIT's Solvent and EnvJS as my answers for scraping Ajax pages. Those projects no longer seem to be accessible.

Out of sheer necessity, I invented another way to actually scrape Ajax pages, and it has worked on tough sites like findthecompany, which have ways of detecting headless javascript engines and showing no data.

The technique is to use a Chrome extension to do the scraping. Chrome extensions are a good place to scrape Ajax pages because they give us access to the javascript-modified DOM. The technique is as follows; I will open-source the code at some point. Create a Chrome extension (assuming you know how to create one, and its architecture and capabilities; this is easy to learn and practice, as there are lots of samples):

  1. Use a content script to access the DOM via XPath. Pull the entire list, table, or dynamically rendered content into a variable, as string HTML nodes. (Only content scripts can access the DOM, but they can't contact a URL via XMLHttpRequest.)
  2. From the content script, using message passing, send the entire stripped DOM as a string to a background script. (Background scripts can talk to URLs but can't touch the DOM.) Message passing is how the two talk to each other.
  3. You can use various events to loop through web pages and pass each stripped HTML node's content to the background script.
  4. Now use the background script to talk to an external server (on localhost), a simple one created with Node.js or Python. Just send the entire HTML nodes as a string to the server, which persists the posted content to files, with appropriate variables identifying page numbers or URLs.
  5. Now you have scraped the AJAX content (HTML nodes as strings), but these are partial HTML documents. Use your favorite XPath library to load them into memory and extract the information into tables or text.
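The localhost server from step 4 can be sketched with only the Python standard library (the port and the X-Page-Id header are assumptions for illustration, not part of the original description):

```python
import pathlib
from http.server import BaseHTTPRequestHandler, HTTPServer

def save_payload(body: bytes, page_id: str, out_dir: str = "scraped") -> pathlib.Path:
    """Persist one POSTed chunk of HTML; the page id keeps files distinct."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    path = out / f"page-{page_id}.html"
    path.write_bytes(body)
    return path

class ScrapeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The extension's background script POSTs the stripped HTML here.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Assumed convention: page number or URL arrives in a custom header.
        page_id = self.headers.get("X-Page-Id", "unknown")
        save_payload(body, page_id)
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ScrapeHandler).serve_forever()
```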

Please comment if you can't follow this and I will write it better (first attempt). I am also trying to release sample code as soon as possible.

@Michael 2013-07-14 09:09:12

I think Brian R. Bondy's answer is useful when the source code is easy to read. I prefer the easier route of using tools like Wireshark or HttpAnalyzer to capture the packets and reconstruct the URL from the "Host" header and the "GET" request line.

For example, I captured a packet like the following:

GET /hqzx/quote.aspx?type=3&market=1&sorttype=3&updown=up&page=1&count=8&time=164330 
Accept: */*
Accept-Language: zh-cn
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)
Connection: Keep-Alive

The URL is then the Host header value combined with the path from the GET line.
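Once the host and path are known, the request can be rebuilt and replayed from a script. A sketch with the standard library (the Host header is not shown in the capture above, so the host name here is a placeholder; the query parameters come from the GET line):

```python
from urllib.parse import urlencode

def build_url(host, path, params):
    """Rebuild the full request URL from the captured Host and GET fields."""
    return "http://{0}{1}?{2}".format(host, path, urlencode(params))

# Parameters taken from the captured GET line; host is a placeholder.
params = {"type": 3, "market": 1, "sorttype": 3, "updown": "up",
          "page": 1, "count": 8, "time": 164330}
url = build_url("quote.example.com", "/hqzx/quote.aspx", params)
# urllib.request.urlopen(url) would then fetch the same data the AJAX call gets.
```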

@TTT 2015-10-16 15:27:39

I like PhearJS, but that might be partially because I built it.

That said, it's a service you run in the background that speaks HTTP(S) and renders pages as JSON for you, including any metadata you might need.

@mattspain 2014-02-09 00:25:13

In my opinion the simplest solution is to use CasperJS, a framework built on PhantomJS, the headless WebKit browser.

The whole page gets loaded, and it's very easy to scrape any Ajax-related data. You can check this basic tutorial to learn about Automating & Scraping with PhantomJS and CasperJS.

You can also take a look at this example code on how to scrape Google Suggest keywords:

/*global casper:true*/
var casper = require('casper').create();
var suggestions = [];
var word = casper.cli.get(0);

if (!word) {
    casper.echo('please provide a word').exit(1);
}

casper.start('', function() {
    this.sendKeys('input[name=q]', word);
});

casper.waitFor(function() {
    return this.fetchText('.gsq_a table span').indexOf(word) === 0;
}, function() {
    suggestions = this.evaluate(function() {
        var nodes = document.querySelectorAll('.gsq_a table span');
        return [].map.call(nodes, function(node) {
            return node.textContent;
        });
    });
});

casper.run(function() {
    this.echo(suggestions.join('\n')).exit();
});

@er.irfankhan11 2015-09-03 07:03:23

But how do you use it with PHP?

@mattspain 2015-09-05 22:17:46

You launch it with shell_exec. No other choice.

@sw. 2013-05-09 18:21:02

The best way to scrape web pages that use Ajax, or javascript in general, is with a browser itself or a headless browser (a browser without a GUI). Currently PhantomJS is a well-promoted headless browser based on WebKit. An alternative that I have used with success is HtmlUnit (in Java, or in .NET via IKVM), which is a simulated browser. Another known alternative is a web automation tool like Selenium.

I have written many articles on this subject, like web scraping Ajax and Javascript sites and automated browserless OAuth authentication for Twitter. At the end of the first article there are many extra resources that I have been compiling since 2011.

@Alex 2011-04-11 15:04:13

As a low cost solution you can also try SWExplorerAutomation (SWEA). The program creates an automation API for any Web application developed with HTML, DHTML or AJAX.

@Brian R. Bondy 2008-11-04 02:24:00


All screen scraping first requires a manual review of the page you want to extract resources from. When dealing with AJAX you usually need to analyze a bit more than just the HTML.

When dealing with AJAX, this just means that the value you want is not in the initial HTML document you requested; instead, javascript is executed that asks the server for the extra information you want.

You can therefore usually analyze the javascript, see which request it makes, and simply call that URL yourself from the start.


Take this as an example: assume the page you want to scrape from has the following script:

<script type="text/javascript">
function ajaxFunction() {
  var xmlHttp;
  try {
    // Firefox, Opera 8.0+, Safari
    xmlHttp = new XMLHttpRequest();
  } catch (e) {
    // Internet Explorer
    try {
      xmlHttp = new ActiveXObject("Msxml2.XMLHTTP");
    } catch (e) {
      try {
        xmlHttp = new ActiveXObject("Microsoft.XMLHTTP");
      } catch (e) {
        alert("Your browser does not support AJAX!");
        return false;
      }
    }
  }
  xmlHttp.onreadystatechange = function() {
    if (xmlHttp.readyState == 4) {
      document.myForm.time.value = xmlHttp.responseText;
    }
  };
  xmlHttp.open("GET", "time.asp", true);
  xmlHttp.send(null);
}
</script>

Then, instead of executing that script, all you need to do is make an HTTP request to time.asp on the same server. The example is from w3schools.
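Making that request from a scraping script is a one-liner with the Python standard library (a sketch; the page URL below is a placeholder):

```python
from urllib.parse import urljoin
from urllib.request import urlopen

def fetch_ajax_value(page_url, endpoint="time.asp"):
    """Call the endpoint the page's javascript would have called,
    resolved relative to the page URL (i.e. on the same server)."""
    url = urljoin(page_url, endpoint)
    with urlopen(url) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        return resp.read().decode(charset)

# Hypothetical usage:
# print(fetch_ajax_value("http://www.example.com/demo/page.html"))
```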

Advanced scraping with C++:

For complex usage, and if you're using C++, you could also consider using SpiderMonkey, the Firefox javascript engine, to execute the javascript on a page.

Advanced scraping with Java:

For complex usage, and if you're using Java, you could also consider using Rhino, the Mozilla javascript engine for Java.

Advanced scraping with .NET:

For complex usage, and if you're using .NET, you could also consider the Microsoft.Vsa assembly, recently replaced by ICodeCompiler/CodeDOM.

@brendosthoughts 2013-06-20 08:55:01

Wow, this was amazingly helpful information. Even with tools like PhantomJS now available, knowing how to custom-scrape a page using the method described is much more convenient once you've investigated what's going on behind the scenes. Thanks a lot, Brian. +1

@sblundy 2008-11-04 01:31:18

If you can get at it, try examining the DOM tree. Selenium does this as a part of testing a page. It also has functions to click buttons and follow links, which may be useful.

@Jabba 2011-04-13 02:31:49

In a Selenium client script you can use the get_html_source() function, but it returns the normal source, not the generated (post-AJAX) source. If you know how to access the generated source, tell us.
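With the newer Selenium WebDriver API (as opposed to the old Selenium RC client this comment refers to), the live DOM can be serialized from inside the browser, which generally reflects the post-AJAX state. A sketch:

```python
def generated_source(driver):
    """Serialize the browser's *current* DOM (after AJAX/javascript have
    run), rather than the HTML originally sent by the server. `driver`
    is any Selenium WebDriver instance; driver.page_source gives a
    similar result."""
    return driver.execute_script("return document.documentElement.outerHTML;")
```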

@wonderchook 2008-11-04 01:31:07

It depends on the AJAX page. The first part of screen scraping is determining how the page works. Is there some variable you can iterate through to request all the data from the page? Personally, I've used Web Scraper Plus for many screen-scraping tasks because it is cheap, not difficult to get started with, and non-programmers can get it working relatively quickly.
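When such an iterable variable exists (typically a page number in the AJAX URL), the loop itself is simple. A sketch with a hypothetical URL template and an injectable fetch function, so it is not tied to any one HTTP library:

```python
def iterate_pages(url_template, fetch, start=1):
    """Yield each page's body until `fetch` returns an empty body.
    `fetch` is any callable taking a URL and returning bytes (e.g. one
    built on urllib.request.urlopen); `url_template` must contain a
    {page} placeholder."""
    page = start
    while True:
        body = fetch(url_template.format(page=page))
        if not body.strip():
            break            # empty page: we've walked off the end of the data
        yield body
        page += 1
```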

Side note: the Terms of Use are probably something you want to check before doing this. Depending on the site, iterating through everything may raise some flags.
