Wednesday 31 December 2014

Data Scraping Services with Proxy Data Scraping

Have you ever heard of "data scraping"? Data scraping is the process of gathering relevant, publicly available information from the internet (or from private areas, where the terms of access allow it) and storing it in databases or spreadsheets for later use in various applications. Data scraping technology is not new, and more than one successful business has been built on it.

Site owners, however, do not always derive much pleasure from the automated harvesting of their data. Webmasters have learned to deny web scrapers access to their websites, using tools or methods that block certain IP addresses from retrieving site content. A scraper is then left either to target a different site, or to move the harvesting script from computer to computer, using a different IP address each time and gathering as much information as possible until all of its machines are finally blocked.

Fortunately, there is a modern solution to this problem. Proxy data scraping technology solves the problem by routing requests through a pool of proxy IP addresses. Every time your scraping program performs an extraction from a website, the site sees the request coming from a different IP address. To the site owner, proxy scraping just looks like a short period of slightly increased traffic from all around the world. Blocking such a pattern is tedious and rarely worth the limited resources it takes, but more importantly, for the most part site owners simply do not know they are being scraped.
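As a rough illustration of the idea, here is a minimal Python sketch that rotates each request through a pool of proxies using the requests library; the proxy addresses and target URL are placeholders, not working values.

import itertools
import requests

# Placeholder proxy addresses; substitute proxies you actually control or rent.
PROXIES = [
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
]
proxy_pool = itertools.cycle(PROXIES)

def fetch(url):
    # Each request goes out through the next proxy in the pool,
    # so the target site sees a different source IP every time.
    proxy = next(proxy_pool)
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)

response = fetch("http://example.com/page-to-scrape")
print(response.status_code)

In practice you would also want error handling for dead proxies and a polite delay between requests.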

Now you may be asking, "Where can I get proxy data scraping technology for my project?" The do-it-yourself solution is free but, unfortunately, not easy at all. Building a proxy scraping network takes time, and requires either a pool of IP addresses and servers you can already use, or a computer guru on call to get everything configured. You could instead rent proxy servers from a hosting provider, but that option tends to be quite expensive, though probably still better than the alternative: dangerous and unreliable (but free) public proxy servers.

There are literally thousands of free proxy servers located all over the world that are fairly easy to use. The trick is finding them. Hundreds of sites list such servers, but locating one that is working, open, and supports the protocols you need is a lesson in persistence, trial, and error. Even if you do find a working public proxy, there are dangers inherent in using it. First, you don't know who owns the server or what other activities are taking place on it. Sending requests or sensitive data through an open proxy is a bad idea: it is easy enough for a proxy server to capture everything you send through it, or everything it sends back to you. If you choose the public proxy route, make sure you never send through it any transaction that could jeopardize you or anyone else, should unsavory types become aware of the data.

A less risky scenario for proxy data scraping is to hire a rotating proxy connection that cycles through a large number of private IP addresses. There are a number of companies offering this that claim to delete all web logs, letting you harvest data anonymously on the web with minimal threat of retaliation. Companies such as http://www.Anonymizer.com offer large enterprise-grade anonymous proxy solutions, but these often carry setup and ongoing costs significant enough to give you pause.

The other advantage is that companies that own such networks can often help you design and implement a custom proxy data scraping program, instead of leaving you to make do with a generic scraping bot. After performing a simple Google search, I quickly found one company (www.ScrapeGoat.com) that provides anonymous proxy server access for data scraping purposes. Or, according to their website, if you want to make life even easier, ScrapeGoat can retrieve the data for you and deliver it in a variety of formats, often before you could even finish setting up your own scraping program.

Whichever path you choose for your proxy data scraping needs, don't let a few simple roadblocks keep you from all the wonderful information stored on the World Wide Web!

Source:http://www.articlesbase.com/small-business-articles/data-scraping-services-with-proxy-data-scraping-4697825.html

Monday 29 December 2014

Web Data Scraping Services At Lowest Rate For Business Directory

We are the world's most trusted provider of directory scraping, business data scraping, and email scraping services, delivering exactly the data you need. We can scour an entire directory database of doctors, lawyers, brokers, financial advisers, and so on, or scrape a particular industry category; the database scraping can be adapted to your requirements.

We are pioneers in worldwide web scraping and data services. We understand the value of our customers' databases, and we put our best effort into collecting data and email IDs. We have scraped data on lawyers, doctors, brokers, realtors, schools, students, universities, IT managers, pubs, bars, nightclubs, dance clubs, financial advisers, liquor stores, Facebook, Twitter, pharmaceutical companies, mortgage brokers, accounting firms, car dealers, artists, health shops, and job portals.

Our business database development services aim to deliver real quality at the lowest price in the industry, and sample work is available on request. We can deliver a business mailing database with a quick turnaround time.

Have you ever tried, with little success, to gather specific information or content and organize it yourself in a folder? You no longer need to worry: our website data scraping and data processing services are the best solution to your problem.

We are currently living through an "information explosion" phase, where an enormous amount of information and content is available for any event, but it is scattered across a small number of channels.

Without order, that information is of little benefit to you or your customers. We organize the information and material in whatever way it is needed, so it is easy to use. For something like a small business guide, we can simply create a separate, structured file in less than an hour.

Our technology lets you use a web-specific database configured and developed to suit your needs. In addition, our finished services can help you identify which web pages to follow as sources of information. This is a cost-effective way to create a database.

Our directory database scraping captures company name, address, state, country, phone, email, and website URL, as in projects we have recently completed. We can deliver a business mailing database with a quick turnaround time, and our business database development services aim for real quality at the lowest price in the industry.

Source:http://www.articlesbase.com/outsourcing-articles/web-data-scraping-services-at-lowest-rate-for-business-directory-5757029.html

Saturday 27 December 2014

Scraping By

In his classic 1976 Chesapeake portrait, Beautiful Swimmers, William Warner described the scrape boat as "a workboat unlike any other I had ever seen on the Bay." Seeming half as wide as it was long, he said, it looked like a "a miniature battleship." There's a reason for that, of course. It's a classic case of form following function; the boat evolved for one purpose, to ply the Bay's grassy shallows for shedding blue crabs.

Said to "float on a heavy dew," scrape boats run from 26 to 30 feet long and 9 to 10 feet wide. The hull is a shallow-V deadrise that quickly flattens toward the stern, enabling the boat to pull its twin scrapes—rectangular steel frames, each with a trailing mesh bag—in knee-deep waters. The broad beam might sound ungainly, but the hull tapers toward the stern—betraying its sailboat origins. And it has a graceful sheer, flowing from a bow height of a few feet to little more than a foot above the water amidships.

And you want a low freeboard when you spend the whole day hoisting aboard scrapes, which weigh 50 pounds apiece, not including the load of sea grass and crabs that come in too. Low sides or not, there's a higher than average incidence of back problems among scrape boat crabbers. They spend long days bending in precisely the position back doctors say puts undue pressure on the lower back as they sort through rolls of grasses to pluck out the peelers and softies. And that alone may be why crab potting is now the far more common way of catching soft crabs.

Some people think that's good, assuming that dragging a scrape across the Bay's beleaguered grass flats must be destructive. But the smooth bar of the scrape, unlike a toothed dredge, doesn't uproot grasses. In fact, where scraping is traditional, the grass beds seem relatively resilient. I've often thought if Maryland and Virginia had stuck with scraping as the major legal way to soft-crab, overfishing might not have become a problem. Pots can be deployed everywhere and by the thousands, whereas scraping is limited to grass beds and to ground covered at three miles per hour; and even the sturdiest waterman can only pull two of them by hand. But peeler pots seem here to stay, and other soft crabbers have taken to using a single, large scrape operated from larger workboats by hydraulic power.

The bottom line is that these lovely, superbly functional expressions of Chesapeake crabbing culture now number only in the dozens, if you count working, wooden models. There are some fiberglass scrape boat hulls in service, and a Carolina skiff or two has been adapted for the task. They are functional, but have little art to them.

It is probably a sign of how fast scrape boats are disappearing that the Smithsonian Institution recently took the lines off Darlene, a scraper worked by Morris Marsh of Smith Island, for its archives. You can see photos of scrape boats, and learn more about the 140-year-old history of scraping, in Paula Johnson's fine book, The Workboats of Smith Island. Mr. Marsh, still going strong in his late 60s, is the scraper who took Warner out nearly 40 years ago when he was researching Beautiful Swimmers.

Indeed, scraping seems to win over those who master it. Marsh's father-in-law, Ed Harrison, scraped for almost 70 years, nearly wearing through the cross-planked bottom of his boat—from the inside—with decades of walking the planks, tending his scrapes. And an islander who scrapes with Marsh today, David Laird, says he is 71—one year younger than Scotty Boy, the scrape boat he took over from his dad in 1958. "I wouldn't even know how to crab in another boat," Laird says.

Soft crabs may well be caught—or farmed—a century from now on the Chesapeake; but no one will devise a way to take them so intimately and beautifully from the shallowest marsh edges and tiniest crevices in the shore as the scrapers do.

Source:http://www.articlesbase.com/culture-articles/scraping-by-1560919.html

Wednesday 24 December 2014

Central Qld Coal: Mining for Needed Investments

The Central Qld Coal Project is situated in the Galilee Coal Basin, Central Queensland, with the purpose of establishing a mine to service international export markets for thermal coal. The estimated cost of such a project is around $7.5 billion, an amount that shows the mining industry is serious business to begin with.

In addition to the mine, the Central Qld Coal Project also proposes to construct a railway, potentially in excess of 400 km depending on the final option: either to transport processed coal to an expanded facility at Abbot Point, or to a new export terminal to be established at Dudgeon Point. However, this would require new major water and power supply infrastructure to service the mine and port, hence the extremely high cost. Because mines are usually developed in remote areas, far from the developed regions where the populace thrives and such infrastructure already exists, setting up new major water and power supplies demands costs on this scale, and even then it is not the only major component of the project's budget.

The site of the Central Qld Coal Project is located 40 km northwest of Alpha and approximately 450 km west of Rockhampton, and contains more than three billion tons of coal. The proposed open-cut mine is expected to be developed in stages. It will have an initial export capacity of 30 million tons per annum with a mine life expectancy of 30 years.

In terms of employment, around 2,500 people will be employed during the construction of the Central Qld Coal Project, and 1,600 permanent positions will be created for its operation stage.

Australia is a major coal exporter: the largest exporter of coal and the fourth largest producer. Australia is also the second largest producer of gold, second only to China. As for opal, Australia is responsible for 95% of world production, making it the largest producer worldwide. Australia also does well in commercially viable diamond deposits, ranking third after Russia and Botswana. This pretty much explains the significance of the mining industry to Australia: it is the backbone of the economy, an industry focused on claiming the blessings the earth has given her lands. The Central Qld Coal Project was conceived to further those exports and improve trade. However, the project requires quite a large sum of money, and it is only through the financial support of investors, both local and international, that it can achieve its goals and begin reaping the fruits of the land.

Source: http://ezinearticles.com/?Central-Qld-Coal:-Mining-for-Needed-Investments&id=6314576

Monday 22 December 2014

Scraping table from html web with CloudStat

If you need to use data from the internet, you don't have to retype it: you can simply extract or scrape it if you know the web URL.

Thanks to the XML package for R, which provides the amazing readHTMLTable() function.

For a study case,

I want to scrape data:

    US Airline Customer Score.
    World Top Chess Players (Men).

A. Scraping US Airline Customer Score table from

http://www.theacsi.org/index.php?option=com_content&view=article&id=147&catid=&Itemid=212&i=Airlines

Code:

library(XML)  # load the XML package, which provides readHTMLTable()

airline = "http://www.theacsi.org/index.php?option=com_content&view=article&id=147&catid=&Itemid=212&i=Airlines"

airline.table = readHTMLTable(airline, header=T, which=1, stringsAsFactors=F)

Result:

B. Scraping World Top Chess players (Men) table from http://ratings.fide.com/top.phtml?list=men

Code:

chess = "http://ratings.fide.com/top.phtml?list=men"

chess.table = readHTMLTable(chess, header=T, which=5, stringsAsFactors=F)

Result:

Done. You have successfully scraped data from a web page with CloudStat.

You can get the full version of this study case (code and result) at Scraping table from html web.

Then you can analyze it as usual. Great! No more retyping data. Enjoy!

Source:http://www.r-bloggers.com/scraping-table-from-html-web-with-cloudstat/

Thursday 18 December 2014

Data Extraction - A Guideline to Use Scrapping Tools Effectively

Many people around the world do not know much about these scraping tools. In their view, mining means extracting resources from the earth; in the age of internet technology, the newly mined resource is data. There are many data mining software tools available on the internet for extracting specific data from the web. Every company in the world deals with tons of data, and managing and converting this data into a useful form is genuinely hectic work. If the right information is not available at the right time, a company loses the chance to make strategic decisions based on accurate information.

That kind of situation costs opportunities in today's competitive market. In these situations, data extraction and data mining tools help you make strategic decisions at the right time and reach your goals in a competitive business. These tools offer many advantages: you can store customer information in an organized manner, you can track the operations of your competitors, and you can measure your own company's performance. It is critical for every company to have this information at its fingertips whenever it is needed.

To survive in this competitive business world, data extraction and data mining are critical to a company's operations. A powerful tool used in online digital mining is the website scraper. With this tool, you can filter data on the internet and retrieve the information you need. This scraping tool is used in various fields and comes in numerous types. Research, surveillance, and the harvesting of direct marketing leads are just a few of the ways a website scraper assists professionals in the workplace.

A screen scraping tool is another tool that is useful for extracting data from the web. It is a big help when you work on the internet and want to mine data to your local hard disk. It provides a graphical interface that allows you to designate the Uniform Resource Locator, the data elements to be extracted, and the scripting logic to traverse pages and work with the mined data, and you can run it at periodic intervals. By using this tool, you can download a database from the internet to your spreadsheets. Another important scraping tool is data mining software, which extracts large amounts of information from the web and converts that data into a useful format. It is used in various sectors of business, especially by those who are generating leads, setting budgets, watching competitors' prices, and analyzing online trends. With this tool, the information is gathered and immediately put to use for your business needs.
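As a simple illustration of what such a tool does under the hood, here is a hedged Python sketch using the requests and BeautifulSoup libraries to pull the rows of an HTML table into a spreadsheet-friendly CSV file; the URL and the table layout are assumptions made for the example.

import csv
import requests
from bs4 import BeautifulSoup

# Hypothetical page containing a simple HTML table of products.
url = "http://example.com/products.html"
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Collect the text of every cell, row by row.
rows = []
for tr in soup.find_all("tr"):
    cells = [cell.get_text(strip=True) for cell in tr.find_all(["th", "td"])]
    if cells:
        rows.append(cells)

# Write the extracted rows to a CSV file you can open in a spreadsheet.
with open("products.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

A real tool adds scheduling, pagination logic, and error handling on top of this basic loop.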

Another useful scraping tool is an email scraping tool, which crawls public email addresses from various websites. You can easily build a large mailing list with this tool, and use those mailing lists to promote your product online, send offers for related business, and much more. With this tool, you can find customers who are likely to be interested in your product, or potential business partners, allowing you to expand your business in the online market.
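For illustration, a minimal Python sketch of this kind of email harvesting might look like the following; the URL is a placeholder, and the regular expression is deliberately simple, so treat it as a starting point rather than a robust address parser.

import re
import requests

# Placeholder page to crawl for publicly listed addresses.
url = "http://example.com/contact.html"
page = requests.get(url, timeout=10).text

# Very rough pattern for matching email-like strings in the page text.
emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", page))

for address in sorted(emails):
    print(address)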

Many well-established and esteemed organizations provide these features free of cost as a trial offer to customers. If you want a permanent service, you pay a nominal fee. You can also download these tools from their websites.

Source: http://ezinearticles.com/?Data-Extraction---A-Guideline-to-Use-Scrapping-Tools-Effectively&id=3600918

Tuesday 16 December 2014

Online Data Entry and Data Mining Services

Data entry job involves transcribing a particular type of data into some other form. It can be either online or offline. The input data may include printed documents like Application forms, survey forms, registration forms, handwritten documents etc.

The data entry process is an inevitable part of the work of any organization; one way or another, every organization requires data entry. The skills required vary depending on the nature of the job: in some cases data is entered from hard-copy formats, and in other cases data is entered directly into a web portal. An online data entry job generally requires the data to be entered into an online database.

For a supermarket, a data associate might be required to enter the goods sold on a particular day and the new goods received that day, to keep the stock well in order. By doing this, the concerned authorities also get an idea of the sales figures for each commodity as they require. In another example, in an office, an account executive might be required to input the day-to-day expenses into an online accounting database in order to keep the accounts in order.

The aim of the data mining process is to collect information from reliable online sources according to the customer's requirements and convert it into a structured format for further use. The major sources for data mining are internet search engines such as Google, Yahoo, Bing, AOL, MSN, etc. Many search engines, such as Google and Bing, provide customized results based on the user's activity history. Based on our keyword search, the search engine lists the websites from which we can gather the details we need.

Data such as company name, contact person, company profile, contact phone number, and email ID is collected from online sources for marketing activities. Once the data has been gathered into a structured format, the marketing team can start its promotions by calling or emailing the people concerned, which may result in new customers. So data mining plays a vital role in today's business expansion. By outsourcing data entry and its related work, you also save the cost that would be incurred in setting up the necessary infrastructure and hiring employees.

Source:http://ezinearticles.com/?Online-Data-Entry-and-Data-Mining-Services&id=7713395

Monday 15 December 2014

Git workflow for Scrapy projects

Our customers often ask us what's the best workflow for working with Scrapy projects. A popular approach we have seen and used in the past is to split the spiders folder (typically project/spiders) into two folders: project/spiders_prod and project/spiders_dev, and use the SPIDER_MODULES setting to control which spiders are loaded in each environment. This works reasonably well, until you have to make changes to common code used by many spiders (i.e. code outside the spiders folder), for example common base spiders.
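For reference, here is a minimal sketch of what that split might look like in a Scrapy settings.py; the module names follow the layout described above, while the environment-variable switch is just one possible convention, not something prescribed by Scrapy.

# settings.py - choose which spider modules to load for this environment.
import os

if os.environ.get("SCRAPY_ENV") == "prod":
    SPIDER_MODULES = ["project.spiders_prod"]
else:
    SPIDER_MODULES = ["project.spiders_dev"]

NEWSPIDER_MODULE = SPIDER_MODULES[0]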

Nowadays, DVCS (in particular, git) have become more popular and people are quite used to branching, so we recommend using a simple git workflow (similar to GitHub flow) where you branch for every change you make. You keep all changes in a branch while they’re being tested and finally merge to master when they’re finished. This means that master branch is always stable and contains only “production-ready” spiders.

If you are using our Scrapy Cloud platform, you can have two projects (myproject-dev, myproject-prod) and use myproject-dev to test the changes in your branch. scrapy deploy in Scrapy 0.17 now adds the branch name to the version name (when using version=GIT or version=HG), so you can see which branch you are going to run directly on the panel. This is particularly useful with large teams working on a single Scrapy project, to avoid stepping on each other's toes when making changes to common code.

Here is a concrete example to illustrate how this workflow works:

•    the developer decides to work on issue 123 (could be a new spider or fixes to an existing spider)
•    the developer creates a new branch to work on the issue
•    git checkout -b issue123
•    the developer finishes working on the code and deploys to the panel (this assumes scrapy.cfg is configured with a deploy target, and using version=GIT – see here for more information)
•    scrapy deploy dev
•    the developer goes into the panel and runs the spider, where he’ll see the branch name (issue123) that will be run
•    the developer checks the scraped data looks fine through the item browser in the panel
•    whenever issues are found, the developer makes more fixes (always working on the same branch) and deploys new versions
•    once all issues are fixed, the developer merges the branch and deploys to production project
•    git checkout master
•    git merge issue123
•    git pull # make sure to pull latest code before deploying
•    scrapy deploy prod

We recommend you keep your common spiders well-tested and use Spider Contracts extensively to test your final spiders. Otherwise experience tells us that base spiders end up being copied (instead of reused) out of fear of breaking old spiders that depend on them, thus turning their maintenance into a nightmare.
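For illustration, here is a small hedged sketch of a spider callback annotated with Scrapy contracts, written in present-day Scrapy syntax; the URL, CSS selectors, and field names are placeholders.

import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"

    def parse(self, response):
        """Parse a product listing page.

        @url http://www.example.com/products
        @returns items 1 20
        @returns requests 0 0
        @scrapes title price
        """
        for product in response.css("div.product"):
            yield {
                "title": product.css("h2::text").get(),
                "price": product.css("span.price::text").get(),
            }

Contracts like these can then be exercised with the scrapy check command before a branch is merged to master.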

Source:http://blog.scrapinghub.com/2013/03/06/git-workflow-scrapy-projects/

Friday 12 December 2014

Microfinance Data Scraping

I went to DataKind's New York DataDive last November and met the Microfinance Information Exchange (MIX), a group that 'delivers data services, analysis, research and business information on the institutions that provide financial services to the world's poor'. They wanted to see whether web-scraping could save them from manually gathering data. So fellow divers and I showed MIX the utility of web-scraping. Over the course of a day, about six people scraped data about microfinance institutions from a bunch of websites, saving MIX an estimated year of manual data entry.

Over the past few months, I worked further with MIX to study who has access to what sorts of financial services. DataKind just put up our blog post about the project. Read the post, or just look at the map and explore the data.

Source:https://blog.scraperwiki.com/2012/05/microfinance-data-scraping/

Thursday 11 December 2014

Content Scraping Reuses Blog Posts without Permission

What do popular blogs and websites such as Social Media Examiner, Copy Blogger, CNN.com, Mashable, and Type A Parent have in common? No, it's not traffic and a loyal online community; each was a victim of the content scraping site "BuzzMyFx." Although most bloggers fall victim to content scrapers at least once, the offending website was such an extreme case that the backlash against it was fast and furious. Thanks to the quick action of many angry bloggers, BuzzMyFx was taken down in a matter of days.

If you’re not familiar with content scraping sites and aren’t sure why they’re bad and what you can do if you fall prey, read on. Not knowing what steps you can take to remove your content from a scraping site can mean someone else is profiting from your hard work.

What is content scraping?

Content scraping is when a blog or website pulls in other bloggers’ content without permission, in many cases passing it off as their own. Instead of stocking their sites with unique content, they steal entire blog posts. Some do leave the original authors’ bylines, but there are plenty that don’t provide attribution at all. This is not a good thing at all.

If you don’t care about someone taking your content and putting it on their blogs and websites without your permission, you should. These sites are stealing traffic, search engine rankings, and even advertising revenue from bloggers. Moreover, by ignoring scraping sites you’re giving the message that this practice is OK.

It’s not OK.

How was BuzzMyFx different?

BuzzMyFx was a little different from your usual scrapers. Bloggers didn’t just find their content had been posted on this site, they learned their entire blogs — down to the design and comments — had been cloned. Plus, any bloggers checking to see if their blogs were being cloned immediately found themselves being scraped as well. Dozens, if not hundreds of blogs were affected. However, bloggers didn’t take this incident sitting down. They spread the word and contacted the site’s host en masse. Thanks to their swift action, and the high number of complaints, the site was removed quickly.

How can I tell if my content is being scraped?
Fortunately for content creators, scrapers are a lazy bunch. Because their sites are automated, and they don’t check or read the content being pulled, they don’t take many precautions to ensure the people they scrape from don’t find their sites. In fact, they may not even care. Fortunately, this makes it easy to learn if your content is being stolen.

    Link to your own articles — When you write a blog post and link to other (of your own) blog posts within that post, it’s not only good SEO. You also will get pingbacks whenever someone else steals your content because of your interlinks. You’re alerted when someone links to your content, and when content is published with your links, you’ll get that alert.

    Google Alerts — If your name, blog’s name, or other unique keywords are set up as Google Alerts, you’ll receive an e-mail every time content is published with these keywords.

    Analytics — When people click on your links that are in scraped content, it will show up as referring traffic in your analytics program. You should always check referring traffic so you can thank the referring site owner, but also to make sure no one is stealing your content.

What steps can I take to remove my content from a scraper?

If you find your content is being stolen, know you have several options. First, you'll need to find out who owns the scraping site. You can find this out by doing a WHOIS domain lookup, which will enable you to search for the website's details, including the name of the webmaster, contact info, and the name of the site's host.

Keep in mind that sometimes the website’s owner will pay extra to have his or her name kept private, but you will always be able to find the name of the host. Once you have this information, you can take the necessary steps to have your content removed.

    Contact the site’s owner personally: Your first step should always be a polite request to remove your content immediately. Let the website owner know he or she is in violation of the Digital Millennium Copyright Act (DMCA), and you will take the necessary steps to report him if he doesn’t comply.

    Contact the site’s host: If you can’t find the name of the person who owns the site, or if he won’t comply with your takedown request, contact the website’s host. You’ll have to prove your content is being stolen. As the host can be held liable for allowing the content theft, it’s in their best interest to contact the website owner and request removal.

    Contact Google: You can contact Google and fill out a form to have them remove the website from their search engines.

    Spread the word: Let all your blogging friends know about content scrapers when you come across them. The more people who take action against content scrapers, the less likely they are to do it again.

Contacting the webmaster with a takedown notice doesn’t have to be an intimidating process, either. The website Plagiarism Today has a wonderful set of stock letters to use to contact webmasters, web hosts, and even Google. All you have to do is insert the necessary information.

Content scrapers and cloners may try to steal your content, but you don’t have to let them. Stand up for what’s yours.

Source: http://www.dummies.com/how-to/content/content-scraping-reuses-blog-posts-without-permiss.html

Monday 8 December 2014

Finding & Removing Spam Blogs Who Scrape Content Onto Free Hosted Blogs

The more popular you become in the blogging world, the more crap you have to deal with!
Content scraping is one chore that can be dealt with swiftly once you understand what to do.
This post contains links which you can use to quickly and easily report content scrapers and spam blogs.
Please share this post and help clean up spam blogs and punish content scrapers.
The first step is to find the URLs of yours which have had their content scraped, and then get the scraper's spam blog removed.

Some of the tools I use to do this are:

    Google Webmaster Tools
    Google Alerts

Finding Scraped Content
Log in to your Google Webmaster Tools account and go to Traffic > Links to Your Site.
You should see something like this:
[Screenshot: Webmaster Tools, Links to Your Site]

The first domain is a site which has copied and embedded my homepage, which I have already dealt with.
The second site is a search engine.
The third domain is the one I want to deal with.

A common method scrapers use is to post the content scraped from your RSS feed onto a free hosted blog like WordPress.com or Blogger.com.

Once you click the WordPress.com link in Webmaster Tools, you'll find all the URLs which have been scraped.
[Screenshot: links to your site from WordPress.com]

There’s 32 url’s which have been linked to so its simply a matter of clicking each of your links and finding the culprits.

The first link is my homepage which has been linked to by legit domains like WordPress developers.
The others are mainly linked to by spam blogs who have scraped the content and used a free hosted service which in this case is WordPress.com.
[Screenshot: WordPress.com links to your site]
Reporting & Removing Spam Blogs

Once you have the url’s of the content scraping blogs as seen in the screenshot above:

    Fill in this basic form to report spam to WordPress.com
    Fill in this form to report copyright content to WordPress.com
    Use this form to report Blogspot and Blogger.com content which has been scraped.
    Fill in one of these forms to remove content from Google

Google Alerts

It's very easy to set up a Google Alert to find your post titles when they get scraped.
If you've set up the WordPress SEO plugin correctly, you should have included your site title at the end of all your post titles.
Then all you need to do is set up a Google Alert for your site title and you'll be notified every time a scraper links to your content.

Link Notifications

You may also receive a pingback or trackback if you have this feature enabled in your discussion settings.

RSS Feed Links


Most content scrapers use automated software to scrape the content from RSS feeds.
Make sure you configure your Reading settings so only a summary is displayed.
[Screenshot: Reading settings with the feed set to show a summary]

Next step is to configure the settings in Yoast’s SEO plugin so links back to your site are included in all RSS feed post summaries.

[Screenshot: Yoast SEO RSS feed links settings]
This will help search engines identify you and your domain as the original author of the content.
There’s other services like copyscape and dmca which can help you protect your sites content if you’re prepared to pay a premium.
That’s it folks.
Its easy to find and get spam sites removed once you know what to do.
Hope you don’t have to deal with this garbage to often.
Ever found out your content has been scraped?
What did you do about it?

Source: http://wpsites.net/blogging/content-scraping-monitoring-and-prevention-tips/

Thursday 11 September 2014

Web Data Extraction / Scraping Data from Kitco Inc. Text Only Market Page

I wish to capture data from

<html>
<head>
<title>Text Only Market Page</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</head>

<body bgcolor="#FFFFFF">
<br><br>
<pre>
<b><font size=6>
  Kitco Inc.

  Text Only Market Page</font></b>

    <a href="http://www.kitco.com/market/">Graphic version of this page</a>

    <a href="http://www.kitco.com/market/LFrate.html">Precious Metals Lease Rates</a> 
    <a href="http://www.kitco.com/gold.londonfix.html">Historical Price Data</a> 
    <a href="http://www.kitco.com/market/marketnews.html">Precious Metals News Headlines</a>

    <font size=4><b><a href="https://online.kitco.com/bullion/completelist_USD.html#gold">Buy gold and silver online direct from Kitco!</a>
   Live quotes for all bullion products.</b></font>


   --------------------------------------------------------------------------------
   London Fix          GOLD          SILVER       PLATINUM           PALLADIUM
                   AM       PM                  AM       PM         AM       PM
   --------------------------------------------------------------------------------
   Jun 19,2012   1628.50   1625.50   28.8100   1486.00   1486.00   629.00   634.00 
   Jun 18,2012   1623.50   1615.50   28.4300   1486.00   1484.00   626.00   628.00 
   --------------------------------------------------------------------------------


                  New York Spot Price
                MARKET IS OPEN
            Will close in 4 hour 25 minutes
   ----------------------------------------------------------------------
   Metals          Bid        Ask           Change        Low       High
   ----------------------------------------------------------------------
   Gold         1619.80     1620.80     -8.90  -0.55%    1616.60  1632.70
   Silver         28.46       28.56     -0.28  -0.97%      28.24    28.95
   Platinum     1479.00     1489.00      0.00   0.00%    1476.00  1500.00
   Palladium     627.00      632.00      0.00   0.00%     622.00   639.00
   ----------------------------------------------------------------------
   Last Update on Jun 19, 2012 at 12:50.59
   ----------------------------------------------------------------------


                Asia / Europe Spot Price
                MARKET IS OPEN
            Will close in 4 hours 25 minutes
   ----------------------------------------------------------------------
   Metals                      Bid          Ask      Change from NY close
   ----------------------------------------------------------------------
   Gold                      1619.80      1620.80     -8.90   -0.55%
   Silver                      28.46        28.56     -0.28   -0.97%
   Platinum                  1479.00      1489.00     +0.00   +0.00%
   Palladium                  627.00       632.00     +0.00   +0.00%
   ----------------------------------------------------------------------
   Last Update on Jun 19, 2012 at 12:50.59
   ----------------------------------------------------------------------


<b>   File created on Tue Jun 19 12:51:04 2012</b>


        <style type="text/css"><!--
 #main_container_footer {width:100%;text-align: center;}
    #main_container_footer #footer_container {width:auto; margin:25px auto 25px auto;}
    #main_container_footer #footer_container ul {margin:0; padding:0;}
    #main_container_footer #footer_container ul li {float:left; display:inline; list-style:none; padding:0 8px; font-family:Verdana, Arial, Helvetica, sans-serif; font-size:12px; color:#000; border-right:1px #000 solid;}
    #main_container_footer #footer_container ul li a {font-family:Verdana, Arial, Helvetica, sans-serif; font-size:12px; color:#000; text-decoration:underline; font-weight:normal;}
    #main_container_footer #footer_container ul li a:hover {color:#ac1a2f; text-decoration:none; font-weight:normal;}
    #main_container_footer #footer_container ul li.no_border {border:0px;}
--></style>
  <table border="0" cellspacing="0" cellpadding="0"><tr><td>
 <div id="main_container_footer">
        <div id="footer_container">
            <ul>
                <li class="no_border"><script type="text/javascript">
copyright=new Date();
update=copyright.getFullYear();
document.write("&copy; "+ update + " Kitco Metals Inc.");
</script></li>
                <li><a href="https://corp.kitco.com/index.html">About Us</a></li>
                <li><a href="http://www.kitco.com/TermsofUse/" target="_top" onclick="Window_open(this.href,'KITCO','top=120,left=250,width=500,height=350'); return false">Website Terms of Use</a></li>
                <li><a href="https://online.kitco.com/help/privacy_policy.html" target="_top" onclick="Window_open(this.href,'KITCO','top=120,left=250,width=500,height=350'); return false">Privacy Policy</a></li>
                <li><a href="http://www.kitco.com/ads/">Advertise With Us</a></li>
                <li><a href="https://corp.kitco.com/en/corporate_culture.html">Careers</a></li>
                <li><a href="https://corp.kitco.com/en/contact.html" target="_top" onclick="Window_open(this.href,'KITCO','top=120,left=250,width=500,height=350'); return false">Contact Us</a></li>
                <li class="no_border"><a href="https://corp.kitco.com/en/feedback.html" target="_top" onclick="Window_open(this.href,'KITCO','top=120,left=250,width=500,height=350'); return false">Feedback</a></li>
            </ul>
        </div>
    </div> 

    </td></tr></table><br /><br />
<script language="JavaScript" type="text/javascript">
<!--
function Window_open (Address) {
  NewWindow = window.open(Address, "Popup", "width=695,height=600,left=100,top=200,resizable=yes,scrollbars=yes");
  NewWindow.focus();
}
// -->
</script>
 <!-- img src="http://www.kitco.com/scripts/counter/counter.pl?txtonlyE.txt" width="1" height="1" -->
<!-- Google-Analytics Code-->
<script type="text/javascript">
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-4074364-3']);
  _gaq.push(['_trackPageview']);

  (function() {
    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
  })();
</script>
</body>
</html>

More specifically, I am looking to capture the following data:

--------------------------------------------------------------------------------
London Fix          GOLD          SILVER       PLATINUM           PALLADIUM
               AM       PM                  AM       PM         AM       PM
--------------------------------------------------------------------------------
Jun 19,2012   1628.50   NA        28.8100   1486.00   1486.00   629.00   634.00 
Jun 18,2012   1623.50   1615.50   28.4300   1486.00   1484.00   626.00   628.00 
--------------------------------------------------------------------------------

Does anybody have any suggestions how I can do this using PHP?



1 Answer



Quick and dirty regex method:

// Fetch the text-only market page and capture each dated row
// (a month/day/year token followed by up to seven numeric or "NA" fields).
$data = file_get_contents('http://www.kitco.com/texten/texten.html');
preg_match_all('/([A-Z]{3,5}\s+[0-9]{1,2},[0-9]{4}\s+([0-9.NA]{2,10}\s+){1,7})/si',$data,$result);

$records = array();
foreach($result[1] as $date) {
    // Split the matched row on whitespace; the first two tokens ("Jun" and "19,2012") form the key.
    $temp = preg_split('/\s+/',$date);
    $index = array_shift($temp);
    $index.= array_shift($temp);
    $records[$index] = implode(',',$temp);
}
print_r($records);

Note, you'd probably want to add some validation, etc.


Source: http://stackoverflow.com/questions/11103001/web-data-extraction-scraping-data-from-kitco-inc-text-only-market-page

Monday 8 September 2014

Scraping webdata from a website that loads data in a streaming fashion


I'm trying to scrape some data off of the FEC.gov website using Python for a project of mine. Normally I use Python mechanize and BeautifulSoup to do the scraping.

I've been able to figure out most of the issues, but can't seem to get around one problem. It seems like the data is streamed into the table and mechanize.Browser() just stops listening.

So here's the issue: if you visit http://query.nictusa.com/cgi-bin/can_ind/2011_P80003338/1/A ... you get the first 500 contributors whose last name starts with A and have given money to candidate P80003338 ... however, if you use browser.open() at that URL all you get is the first ~5 rows.

I'm guessing it's because mechanize isn't letting the page fully load before the .read() is executed. I tried putting a time.sleep(10) between the .open() and .read(), but that didn't make much difference.

And I checked, there's no javascript or AJAX in the website (or at least none is visible when you use 'view-source'). So I don't think it's a javascript issue.

Any thoughts or suggestions? I could use selenium or something similar but that's something that I'm trying to avoid.

-Will

2 Answers

Why not use an HTML parser like lxml with XPath expressions?

I tried

>>> import lxml.html as lh
>>> data = lh.parse('http://query.nictusa.com/cgi-bin/can_ind/2011_P80003338/1/A')
>>> name = data.xpath('/html/body/table[2]/tr[5]/td[1]/a/text()')
>>> name
[' AABY, TRYGVE']
>>> name = data.xpath('//table[2]/*/td[1]/a/text()')
>>> len(name)
500
>>> name[499]
' AHMED, ASHFAQ'
>>>



Similarly, you can create xpath expression of your choice to work with.


Source: http://stackoverflow.com/questions/9435512/scraping-webdata-from-a-website-that-loads-data-in-a-streaming-fashion

How can I circumvent page view limits when scraping web data using Python?

I am using Python to scrape US postal code population data from http://www.city-data.com, through this directory: http://www.city-data.com/zipDir.html. The specific pages I am trying to scrape are individual postal code pages with URLs like this: http://www.city-data.com/zips/01001.html. All of the individual zip code pages I need to access have this same URL format, so my script simply does the following for each postal_code in range:

    Creates URL given postal code
    Tries to get response from URL
    If (2), Check the HTTP of that URL
    If HTTP is 200, retrieves the HTML and scrapes the data into a list
    If HTTP is not 200, pass and count error (not a valid postal code/URL)
    If no response from URL because of error, pass that postal code and count error
    At end of script, print counter variables and timestamp

The problem is that I run the script and it works fine for ~500 postal codes, then suddenly stops working and returns repeated timeout errors. My suspicion is that the site's server is limiting the page views coming from my IP address, preventing me from completing the amount of scraping that I need to do (all 100,000 potential postal codes).

My question is as follows: Is there a way to confuse the site's server, for example using a proxy of some kind, so that it will not limit my page views and I can scrape all of the data I need?

Thanks for the help! Here is the code:

##POSTAL CODE POPULATION SCRAPER##

import requests

import re

import datetime

def zip_population_scrape():

    """
    This script will scrape population data for postal codes in range
    from city-data.com.
    """
    postal_code_data = [['zip','population']] #list for storing scraped data

    #Counters for keeping track:
    total_scraped = 0
    total_invalid = 0
    errors = 0


    for postal_code in range(1001,5000):

        #This if statement is necessary because the postal code can't start
        #with 0 in order for the for statement to iterate successfully
        if postal_code <10000:
            postal_code_string = str(0)+str(postal_code)
        else:
            postal_code_string = str(postal_code)

        #all postal code URLs have the same format on this site
        url = 'http://www.city-data.com/zips/' + postal_code_string + '.html'

        #try to get current URL
        try:
            response = requests.get(url, timeout = 5)
            http = response.status_code

            #print current for logging purposes
            print url +" - HTTP:  " + str(http)

            #if valid webpage:
            if http == 200:

                #save html as text
                html = response.text

                #extra print statement for status updates
                print "HTML ready"

                #try to find two substrings in HTML text
                #add the substring in between them to list w/ postal code
                try:           

                    found = re.search('population in 2011:</b> (.*)<br>', html).group(1)

                    #add to # scraped counter
                    total_scraped +=1

                    postal_code_data.append([postal_code_string,found])

                    #print statement for logging
                    print postal_code_string + ": " + str(found) + ". Data scrape successful. " + str(total_scraped) + " total zips scraped."
                #if substrings not found, try searching for others
                #and doing the same as above   
                except AttributeError:
                    found = re.search('population in 2010:</b> (.*)<br>', html).group(1)

                    total_scraped +=1

                    postal_code_data.append([postal_code_string,found])
                    print postal_code_string + ": " + str(found) + ". Data scrape successful. " + str(total_scraped) + " total zips scraped."

            #if http =404, zip is not valid. Add to counter and print log        
            elif http == 404:
                total_invalid +=1

                print postal_code_string + ": Not a valid zip code. " + str(total_invalid) + " total invalid zips."

            #other http codes: add to error counter and print log
            else:
                errors +=1

                print postal_code_string + ": HTTP Code Error. " + str(errors) + " total errors."

        #if get url fails by connnection error, add to error count & pass
        except requests.exceptions.ConnectionError:
            errors +=1
            print postal_code_string + ": Connection Error. " + str(errors) + " total errors."
            pass

        #if get url fails by timeout error, add to error count & pass
        except requests.exceptions.Timeout:
            errors +=1
            print postal_code_string + ": Timeout Error. " + str(errors) + " total errors."
            pass


    #print final log/counter data, along with timestamp finished
    now= datetime.datetime.now()
    print now.strftime("%Y-%m-%d %H:%M")
    print str(total_scraped) + " total zips scraped."
    print str(total_invalid) + " total unavailable zips."
    print str(errors) + " total errors."



Source: http://stackoverflow.com/questions/25452798/how-can-i-circumvent-page-view-limits-when-scraping-web-data-using-python

Sunday 7 September 2014

Web data scraping (online news comments) with Scrapy (Python)

Since you seem like the try-first, ask-questions-later type (that's a very good thing), I won't give you an answer, but a (very detailed) guide on how to find the answer.

The thing is, unless you are a Yahoo developer, you probably don't have access to the source code of the site you're trying to scrape. That is to say, you don't know exactly how the site is built and how your requests to it as a user are being processed on the server-side. You can, however, investigate the client-side and try to emulate it. I like using Chrome Developer Tools for this, but you can use others such as FF Firebug.

So first off we need to figure out what's going on. The way it works is that you click on 'show comments', it loads the first ten, then you need to keep clicking for the next ten comments each time. Notice, however, that all this clicking isn't taking you to a different link, but fetches the comments live, which is a very neat UI but for our case requires a bit more work. I can tell two things right away:

    They're using javascript to load the comments (because I'm staying on the same page).
    They load them dynamically with AJAX calls each time you click (meaning that instead of loading the comments with the page and just showing them to you, with each click it does another request to the database).

Now let's right-click and inspect element on that button. It's actually just a simple span with text:

<span>View Comments (2077)</span>

By looking at that we still don't know how that's generated or what it does when clicked. Fine. Now, keeping the devtools window open, let's click on it. This opened up the first ten. But in fact, a request was being made for us to fetch them. A request that Chrome devtools recorded. We look in the network tab of the devtools and see a lot of confusing data. Wait, here's one that makes sense:

http://news.yahoo.com/_xhr/contentcomments/get_comments/?content_id=42f7f6e0-7bae-33d3-aa1d-3dfc7fb5cdfc&_device=full&count=10&sortBy=highestRated&isNext=true&offset=20&pageNumber=2&_media.modules.content_comments.switches._enable_view_others=1&_media.modules.content_comments.switches._enable_mutecommenter=1&enable_collapsed_comment=1

See? _xhr and then get_comments. That makes a lot of sense. Going to that link in the browser gave me a JSON object (looks like a python dictionary) containing all ten comments which that request fetched. Now that's the request you need to emulate, because that's the one that gives you what you want. First let's translate this to some normal request that a human can read:

go to this url: http://news.yahoo.com/_xhr/contentcomments/get_comments/
include these parameters: {'_device': 'full',
          '_media.modules.content_comments.switches._enable_mutecommenter': '1',
          '_media.modules.content_comments.switches._enable_view_others': '1',
          'content_id': '42f7f6e0-7bae-33d3-aa1d-3dfc7fb5cdfc',
          'count': '10',
          'enable_collapsed_comment': '1',
          'isNext': 'true',
          'offset': '20',
          'pageNumber': '2',
          'sortBy': 'highestRated'}
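To make this concrete, here is a minimal Python sketch of emulating that request with the requests library; the endpoint and parameter values are simply the ones captured above, so expect to have to adjust them (and they may no longer work at all).

import requests

url = "http://news.yahoo.com/_xhr/contentcomments/get_comments/"
params = {
    "_device": "full",
    "_media.modules.content_comments.switches._enable_mutecommenter": "1",
    "_media.modules.content_comments.switches._enable_view_others": "1",
    "content_id": "42f7f6e0-7bae-33d3-aa1d-3dfc7fb5cdfc",
    "count": "10",
    "enable_collapsed_comment": "1",
    "isNext": "true",
    "offset": "20",
    "pageNumber": "2",
    "sortBy": "highestRated",
}

# The endpoint returns JSON; each batch of ten comments comes back
# as a Python dictionary once decoded.
response = requests.get(url, params=params, timeout=10)
comments = response.json()
print(comments)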

Now it's just a matter of trial-and-error. However, a few things to note here:

    Obviously the count is what decides how many comments you're getting. I tried changing it to 100 to see what happens and got a bad request. And it was nice enough to tell me why - "Offset should be multiple of total rows". So now we understand how to use offset.

    The content_id is probably something that identifies the article you are reading. Meaning you need to fetch that from the original page somehow. Try digging around a little, you'll find it.

    Also, you obviously don't want to fetch 10 comments at a time, so it's probably a good idea to find a way to fetch the number of total comments somehow (either find out how the page gets it, or just fetch it from within the article itself).

    Using the devtools you have access to all client-side scripts. So by digging you can find that that link to /get_comments/ is kept within a javascript object named YUI. You can then try to understand how it is making the request, and try to emulate that (though you can probably figure it out yourself).

    You might need to overcome some security measures. For example, you might need a session key from the original article before you can access the comments. This is used to prevent direct access to some parts of the sites. I won't trouble you with the details, because it doesn't seem like a problem in this case, but you do need to be aware of it in case it shows up.

    Finally, you'll have to parse the JSON object (python has excellent built-in tools for that) and then parse the html comments you are getting (for which you might want to check out BeautifulSoup).

As you can see, this will require some work, but despite all I've written, it's not an extremely complicated task either.

So don't panic.

It's just a matter of digging and digging until you find gold (also, having some basic web knowledge doesn't hurt).

Then, if you face a roadblock and really can't go any further, come back here to SO, and ask again. Someone will help you.


Source: http://stackoverflow.com/questions/20218855/web-data-scraping-online-news-comments-with-scrapy-python

Saturday 6 September 2014

A good web data extraction/screen scraper program?

I need to capture product data from a site on a regular basis and wondered if anyone knows of a good software program? I've trialed Mozenda, but it's a monthly subscription and pricey in the long term. Obviously something that's free would be best, but I don't mind paying either. I just need a decent program that's reliable and doesn't require much programming knowledge.

You can try ScraperWiki.com if you know python.

I've experimented with Screen-Scraper and found it easy to use. The application comes in multiple versions: basic (which is free), professional, and enterprise. Also, multiple platforms are supported.

Hire a programmer to do it so that there is only a one off cost. I often see similar projects on freelancing websites like Elance and oDesk.

I really like iMacros. You can give it a test drive to see if it meets your needs with the totally free Firefox extension (there are also IE versions), but there are also more full-featured application and "server" versions that have more features and the ability to do things in an unattended manner.

Here are some other alternatives to consider:

    License the data from the provider. Call em up and ask 'em.

    Use Amazon Mechanical Turk to get humans to copy and paste and format it for ya. They are cheap.

    For automation, it depends on how complicated the HTML is and how often it changes. You could use Excel's Web Data Import if it's really simple.


You can use iRobot from IRobotSoft, which is totally free and provides more functionality than other paid software. Watch the demos here http://irobotsoft.com/help/ to see how simple it is.

Questions on their forum were answered very quickly.


Source: http://stackoverflow.com/questions/2334164/a-good-web-data-extraction-screen-scraper-program

Thursday 4 September 2014

How to login to website and extract data using PHP [closed]

I have installed Tiny Tiny RSS on my computer (Windows) and also have XAMPP installed (localhost).

I want to be able to use PHP to extract data from the Tiny tiny RSS webpage.

I have tried this, which just opens the front page:

<?php
$homepage = file_get_contents('my install tiny tiny rss url');
echo $homepage;
?>

But how do I log in and extract the data?

You can use cURL to send POST data and headers. To log in you need to replicate the exact data exchange between the client and the server.
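The same flow, sketched with Python's requests library purely for illustration (the question itself is about PHP and cURL): POST the login form, keep the session cookie, then request the page you want. The login path and form field names below are placeholders; copy the real ones from the login form or from the browser's network tab.

# Illustrative sketch only: log in, keep the session cookie, fetch a page.
# The path and form field names are placeholders, not Tiny Tiny RSS's real ones.
import requests

session = requests.Session()                                  # keeps the session cookie
session.post("http://localhost/tt-rss/login.php", data={      # placeholder path
    "login": "myuser",                                        # placeholder credentials
    "password": "mypassword",
})
page = session.get("http://localhost/tt-rss/prefs.php")       # now authenticated
print(page.text)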


Source: http://stackoverflow.com/questions/20611918/how-to-login-to-website-and-extract-data-using-php

Data Scraping from PDF and Excel

I am doing a little data scraping. There are 3 types of files from which I am scraping data:

1- HTML
2- PDF
3- Excel(xls)

For HTML I am comfortable; I am using HTML Agility for that.

For PDF and Excel I need suggestions from anyone.



Concerning Excel: if you are in an MS environment you can either do Office Automation or use OLEDB. In a Java environment look at Apache POI.

EDIT: Concerning PDF, in Java try Apache PDFBox. It can also work in .NET using IKVM.

I can recommend Cogniview's PDF2XL, a reasonably inexpensive commercial product, to extract data from tables in PDF files into Excel. We have used it with great success.

HTML Agility is a library. It's good to use. But then, why do you need separate tools for different data extraction purposes? Use Automation Anywhere to extract data from any source. As far as I know, it would work for all three sources you have specified. Google it.

Source: http://stackoverflow.com/questions/3147803/data-scraping-from-pdf-and-excel

Wednesday 3 September 2014

Excel VBA Data Mining Real-Time Data from a Web Page that Refreshes Data

I want to capture real-time data that updates into a table on a webpage; I prefer capturing it into Excel using VBA, but I will write it in .NET C# or VB if that is easier.

The data updates about every 1 or 2 seconds, and I want to just grab the latest data quotes and log them into my spreadsheet; the table names are the same, only the data refreshes, and it does so automatically on the web page.

I've done a lot of Excel VBA and I know how to download a URL to a file--this is NOT what I want; I want to gain access to my webpage that is active and grab the data updates after I've logged into my site and selected a webpage that I like.

Is there a simple way to access this data on the webpage from Excel or .Net? Because it refreshes no more than once every 1 or 2 seconds, it is easy to just keep checking it for updates, and I can compare the latest data to see if it actually refreshed.


In Excel 2003, use Data/Import External Data/New Web Query.
Browse to your page and select the table you want to import.
After that you can either do a manual Refresh, or use a timer procedure to refresh the query automatically.

Source: http://stackoverflow.com/questions/9855794/excel-vba-data-mining-real-time-data-from-a-web-page-that-refreshes-data

Tuesday 2 September 2014

Need to pull data from a website…web query? macro?


I have a list of every DOT # (Dept. of Trans.) in the country. I want to find out the insurance effective date for each one of these companies. If you go to http://li-public.fmcsa.dot.gov --> "continue" --> then from the dropdown select "carrier search" and hit "go", it'll take you to a search form (that is the only way to get to this screen).

From there, you can input a DOT # X (use 61222 as an example) and it'll bring you to another screen. Click "view report in HTML" and then down on the bottom you'll see "Active/Pending Insurance". I want to pull the "effective date" from that page and stick it in the spreadsheet next to the DOT # X that I already know.

Of the thousands of DOT #'s in my list, not all will have filings on this website, if that makes a difference.

Can this be done with a Macro or Excel Web Query? I know I probably sound like a total novice, but I'd appreciate any help I could get.

Can you do it? Frankly, even if you could, you'd lock up the spreadsheet while it's doing that processing. And in the end, how would you handle an error half-way through?

I'd not do this in a client-facing application. This sounds more like something to do in a server-side app that can do the processing and gather the information in a more controlled environment. Then your Excel spreadsheet could query that app and get the information in one fell swoop. Error handling is much simpler and you don't end up sitting there staring at Excel while it works its way through thousands of web sites. It was not built to do that elegantly.

What would you write the web service I'm describing in? Well, it depends on your preference. Me, I'd write it in Ruby on Rails, since it can easily handle the scraping aspect of the task and can report the data out easily as well. But it really falls back to whatever you're most comfortable coding in.
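Just to make that architecture concrete, here is a minimal sketch of such a service written with Python and Flask (the answer's own suggestion is Ruby on Rails). The FMCSA report URL, its parameter and the parsing regex are placeholders that would have to be worked out against the real search form and report HTML.

# Minimal sketch of a server-side lookup service; the spreadsheet queries it
# instead of scraping directly. URL, parameter and regex are placeholders.
import re

import requests
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/insurance/<dot_number>")
def insurance(dot_number):
    resp = requests.get(
        "http://li-public.fmcsa.dot.gov/placeholder-report-url",
        params={"dot": dot_number},
    )
    # Placeholder parse: first date-like string after an "Effective Date" label.
    match = re.search(r"Effective Date.*?(\d{2}/\d{2}/\d{4})", resp.text, re.S)
    return jsonify({
        "dot": dot_number,
        "effective_date": match.group(1) if match else None,
    })

if __name__ == "__main__":
    app.run()   # e.g. http://localhost:5000/insurance/61222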


Source: http://stackoverflow.com/questions/15286429/need-to-pull-data-from-a-website-web-query-macro

How to extract data from web 2.0 graphs using a scraper


I have recently come across a web page containing a graph object that displays the (x, y) values on the object as the mouse is rolled across it. Is there any way to automate the extraction of this data?

How is the graph data loaded? If it is embedded in the page source then you can extract it with XPath or regex. Otherwise, use Firebug to see how it is loaded.



You will need a solution that works inside the web browser, so the AJAX/Javascript is properly rendered.

I have used iMacros with good success for web scraping in the past. There are free/open-source and "PRO" paid editions (comparison table here).

Another option is always to custom code something with the Microsoft webbrowser control.
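One way to make the in-browser approach concrete is a minimal sketch with Selenium for Python (my tool choice here, not the answer's). It assumes the page keeps its series data in a JavaScript variable; the URL and the variable name are hypothetical and would have to be found by inspecting the page's scripts, as the first answer suggests.

# Drive a real browser so the page's JavaScript runs, then read the series
# data back out of the page's JavaScript context. The URL and variable name
# are hypothetical; the variable is assumed to hold a list of [x, y] pairs.
from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get("http://example.com/page-with-graph")
    points = driver.execute_script("return window.chartData;")  # assumed variable
    for x, y in points:
        print(x, y)
finally:
    driver.quit()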


Source: http://stackoverflow.com/questions/3980774/how-to-extract-data-from-web-2-0-graphs-using-a-scraper

Legality of Web Scraping vs Normal Use


I know the topic of web scraping has been discussed before (example), and I understand it's a bit of a grey area depending on a lot of factors (e.g. website's terms of use).

What I'd like to ask is: how is web scraping any different from (a) how we access the webpage via a web browser, and (b) how web crawlers (e.g. Google) download and index webpages?

Without knowing the legal background, I can't help but think that they're all just HTTP requests. If web scraping is illegal, then crawling and indexing (by Google, for instance) should be illegal too.

Of course, if your program is hitting the server so hard that it causes a denial of service, it's a different story altogether... my point is simply about accessing and using data that is already open to the public.



I know this is a dead thread, but it would be nice to place some legal implications here due to its ranking in my Google Search. I cannot help but figure I am not the only one who searches like I do.

Legally, in the US, there are a few factors that seem to be important.

    Are you doing anything that is akin to hacking or gaining unauthorized access under the Computer Fraud and Abuse Act? Exploiting vulnerabilities and passing SQL in the URL to open a database, no matter how badly it was programmed, is illegal, with up to a 15-year sentence (see the cases where an individual exploited security vulnerabilities in Verizon). Also, add a timeout even if you round-robin or use proxies. DDoS attacks are attacks; 1000 requests per second can shut down a lot of servers providing public information. The result here is up to 15 years in jail.

    Copyright Law: As mentioned, pure replication of data is illegal. Even 4% replication has been deemed a breach. With the recent gutting of the DMCA, a person is even more vulnerable to civil and criminal penalties.

    Trespass and Chattels: The following from wikipedia says it all.

    U.S. courts have acknowledged that users of "scrapers" or "robots" may be held liable for committing trespass to chattels,[5][6] which involves a computer system itself being considered personal property upon which the user of a scraper is trespassing. The best known of these cases, eBay v. Bidder's Edge, resulted in an injunction ordering Bidder's Edge to stop accessing, collecting, and indexing auctions from the eBay web site.

    Paywalls and Product: When going behind paywalls and breaching contract by clicking an agreement not to do something and then doing it, you tip the balance from negligence toward willfulness [an issue for damages and penalties, not guilt] in civil and any criminal trials. (Sorry, I originally wanted to say ignorance, but it really isn't a defense.)

    International: EU law and other law is way more lax. Corporations with big budgets dominate our legal landscape. They control the system in a very real way with their $$$.

Basically, get public information and information that is available without going behind a pay wall. Think like a user of the internet and combine a bunch of sources into a unique product. Don't just 'steal' an entire site (it isn't really stealing if it is a government site that offers public data especially for download but is if you download all or even more than a couple of the listings on ebay). Read the terms and conditions to know who actually owns the content.

Here are a few examples. Trulia owns its information but you could use it to go to an agents website and collect a legal amount of information. The legal amount is determinable. However, a public MLS listing lookup site with no agreement or terms and offering data to the public is fair game. The MLS numbers lists, however, are normally not fair game.

If a researcher can get to data, so can you. If a researcher needs permission, so do you. A computer is like having a million corporate researchers at your disposal.

As for company policy, it is usually used internally to shield from liability and serves as a warning, but it is not entirely enforceable. The legal parts letting you know about copyrights and such are enforceable, and are usually supposed to be known by everyone anyway. Complete ignorance is not a legal protection. It does provide a ground set of rules. "Be nice or get banned" is the message, as far as I know.

My personal strategy is to start with public data and embellish it within legal means.


Source: http://stackoverflow.com/questions/14735791/legality-of-web-scraping-vs-normal-use

Anyone knows an online tool that can scrape a page and create a REST API for the scraped data?


I'm looking for a SaaS solution that is able to log in to a platform, scrape data (reports) and then allow accessing the data through an API. I have some reporting platforms that provide web reporting and email reporting but no API. Online reporting doesn't help, and email reporting, although it can be automated and scraped, isn't so reliable.

If you are willing to do the scraping through your own connection, have a look at Import IO. They have a desktop application that you use to teach the system how to scrape a page, and then you run the crawler from that application - and you can run it for as long as you like, as far as I can tell.

You may then upload your data to the Import cloud, from where it is available via an API on the import.io servers. Useful data can be made public to donate it "to the commons" if you wish.


I did some more digging and found iMacros as a possible solution. It's Windows-based, which is a drawback in my case, but it does allow automation of the scraping and, afterwards, interaction via common web scripting languages like PHP and ASP.NET.


If you are familiar with jQuery, I think you can use Node.js and the Cheerio module; then you can create a simple application to do automatic scraping. Actually, I have already built a site to do online web scraping based on the above-mentioned tech; the site is www.datafiddle.net, you can take a look at it.


Source: http://stackoverflow.com/questions/19646028/anyone-knows-an-online-tool-that-can-scrape-a-page-and-create-a-rest-api-for-the

Tuesday 26 August 2014

How do you scrape AJAX pages?

Overview:

All screen scraping first requires manual review of the page you want to extract resources from. When dealing with AJAX you usually just need to analyze a bit more than simply the HTML.

When dealing with AJAX this just means that the value you want is not in the initial HTML document that you requested, but that javascript will be executed which asks the server for the extra information you want.

You can therefore usually simply analyze the javascript, see which request it makes, and just call this URL yourself from the start.

Example:

Take this as an example, assume the page you want to scrape from has the following script:

<script type="text/javascript">
function ajaxFunction()
{
var xmlHttp;
try
  {
  // Firefox, Opera 8.0+, Safari
  xmlHttp=new XMLHttpRequest();
  }
catch (e)
  {
  // Internet Explorer
  try
    {
    xmlHttp=new ActiveXObject("Msxml2.XMLHTTP");
    }
  catch (e)
    {
    try
      {
      xmlHttp=new ActiveXObject("Microsoft.XMLHTTP");
      }
    catch (e)
      {
      alert("Your browser does not support AJAX!");
      return false;
      }
    }
  }
  xmlHttp.onreadystatechange=function()
    {
    if(xmlHttp.readyState==4)
      {
      document.myForm.time.value=xmlHttp.responseText;
      }
    }
  xmlHttp.open("GET","time.asp",true);
  xmlHttp.send(null);
  }
</script>

Then all you need to do is make an HTTP request to time.asp on the same server instead of loading the whole page. Example from w3schools.
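For instance, a minimal sketch of that direct request in Python (the host name is hypothetical; use whatever server the page came from):

# Call the AJAX endpoint directly instead of loading the full page.
import requests

resp = requests.get("http://www.example.com/time.asp")   # hypothetical host
resp.raise_for_status()
print(resp.text)   # the same payload the page would have written into the form field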


Source: http://stackoverflow.com/questions/260540/how-do-you-scrape-ajax-pages

using Perl to scrape a website

I am interested in writing a perl script that goes to the following link and extracts the number 1975: https://familysearch.org/search/collection/results#count=20&query=%2Bevent_place_level_1%3ACalifornia%20%2Bevent_place_level_2%3A%22San%20Diego%22%20%2Bbirth_year%3A1923-1923~%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219

That page shows the number of white men born in the year 1923 who lived in San Diego County, California, in 1940. I am trying to do this in a loop structure to generalize over multiple counties and birth years.

In the file, locations.txt, I put the list of counties, such as San Diego County.

The current code runs, but instead of the number 1975, it displays "unknown". The number 1975 should end up in $val.

I would very much appreciate any help!

#!/usr/bin/perl

use strict;

use LWP::Simple;

open(L, "locations26.txt");

my $url = 'https://familysearch.org/search/collection/results#count=20&query=%2Bevent_place_level_1%3A%22California%22%20%2Bevent_place_level_2%3A%22%LOCATION%%22%20%2Bbirth_year%3A%YEAR%-%YEAR%~%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219';

open(O, ">out26.txt");
 my $oldh = select(O);
 $| = 1;
 select($oldh);
 while (my $location = <L>) {
     chomp($location);
     $location =~ s/ /+/g;
      foreach my $year (1923..1923) {
                 my $u = $url;
                 $u =~ s/%LOCATION%/$location/;
                 $u =~ s/%YEAR%/$year/;
                 #print "$u\n";
                 my $content = get($u);
                 my $val = 'unknown';
                 if ($content =~ / of .strong.([0-9,]+)..strong. /) {
                         $val = $1;
                 }
                 $val =~ s/,//g;
                 $location =~ s/\+/ /g;
                 print "'$location',$year,$val\n";
                 print O "'$location',$year,$val\n";
         }
     }

Update: the API is not a viable solution. I have been in contact with the site developer. The API does not apply to that part of the webpage. Hence, any solution pertaining to JSON will not be applicable.



Source: http://stackoverflow.com/questions/14654288/using-perl-to-scrape-a-website

Monday 25 August 2014

Data Scraping using php

Here is my code

    $ip=$_SERVER['REMOTE_ADDR'];

    $url=file_get_contents("http://whatismyipaddress.com/ip/$ip");

    preg_match_all('/<th>(.*?)<\/th><td>(.*?)<\/td>/s',$url,$output,PREG_SET_ORDER);

    $isp=$output[1][2];

    $city=$output[9][2];

    $state=$output[8][2];

    $zipcode=$output[12][2];

    $country=$output[7][2];

    ?>
    <body>
    <table align="center">
    <tr><td>ISP :</td><td><?php echo $isp;?></td></tr>
    <tr><td>City :</td><td><?php echo $city;?></td></tr>
    <tr><td>State :</td><td><?php echo $state;?></td></tr>
    <tr><td>Zipcode :</td><td><?php echo $zipcode;?></td></tr>
    <tr><td>Country :</td><td><?php echo $country;?></td></tr>
    </table>
    </body>

How do I find out the ISP provider of a person viewing a PHP page?

Is it possible to use PHP to track or reveal it?

Error: http://i.imgur.com/LGWI8.png

cURL Scraping

<?php
$curl_handle=curl_init();
curl_setopt( $curl_handle, CURLOPT_FOLLOWLOCATION, true );
$url='http://www.whatismyipaddress.com/ip/132.123.23.23';
curl_setopt($curl_handle, CURLOPT_URL,$url);
curl_setopt($curl_handle, CURLOPT_HTTPHEADER, Array("User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.15) Gecko/20080623 Firefox/2.0.0.15") );
curl_setopt($curl_handle, CURLOPT_CONNECTTIMEOUT, 2);
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl_handle, CURLOPT_USERAGENT, 'Your application name');
$query = curl_exec($curl_handle);

curl_close($curl_handle);
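// Note: this regex runs against $url (the URL string), not against $query, which holds the fetched HTML.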
preg_match_all('/<th>(.*?)<\/th><td>(.*?)<\/td>/s',$url,$output,PREG_SET_ORDER);
echo $query;
$isp=$output[1][2];

$city=$output[9][2];

$state=$output[8][2];

$zipcode=$output[12][2];

$country=$output[7][2];
?>
<body>
<table align="center">
<tr><td>ISP :</td><td><?php echo $isp;?></td></tr>
<tr><td>City :</td><td><?php echo $city;?></td></tr>
<tr><td>State :</td><td><?php echo $state;?></td></tr>
<tr><td>Zipcode :</td><td><?php echo $zipcode;?></td></tr>
<tr><td>Country :</td><td><?php echo $country;?></td></tr>
</table>
</body>

Error: http://i.imgur.com/FJIq6.png

What is wrong with my code here? Is there any alternative code that I can use here?

I am not able to scrape that data as described here. http://i.imgur.com/FJIq6.png

P.S. Please post full code. It would be easier for me to understand.



Source: http://stackoverflow.com/questions/10461088/data-scraping-using-php

PDF scraping using R

I have been using the XML package successfully for extracting HTML tables but want to extend to PDFs. From previous questions it does not appear that there is a simple R solution, but I wondered if there had been any recent developments.

Failing that, is there some way in Python (in which I am a complete novice) to obtain and manipulate PDFs so that I could finish the job off with the R XML package?

Extracting text from PDFs is hard, and nearly always requires lots of care.

I'd start with the command line tools such as pdftotext and see what they spit out. The problem is that PDFs can store the text in any order, can use awkward font encodings, and can do things like use ligature characters (the joined up 'ff' and 'ij' that you see in proper typesetting) to throw you.

pdftotext is installable on any Linux system
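Since the question also asks about Python, here is a minimal sketch of driving pdftotext from Python so the output can be post-processed (or handed back to R). It assumes pdftotext from poppler-utils is on the PATH; the file name is a placeholder.

# Run pdftotext and capture its output as a Python string.
import subprocess

def pdf_to_text(pdf_path):
    """Return the plain text of a PDF, preserving rough layout."""
    result = subprocess.run(
        ["pdftotext", "-layout", pdf_path, "-"],   # "-" sends the text to stdout
        capture_output=True, check=True, text=True,
    )
    return result.stdout

print(pdf_to_text("report.pdf"))   # placeholder file name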



Source: http://stackoverflow.com/questions/7918718/pdf-scraping-using-r

Sunday 24 August 2014

Php Scraping data from a website

I am very new to programming and need a little help with getting data from a website and passing it into my PHP script.

The website is http://www.birthdatabase.com/.

I would like to plug in a name (First and Last) and retrieve the result. I know you can query the site by passing the name in the URL, but I am having problems scraping the results.

http://www.birthdatabase.com/cgi-bin/query.pl?textfield=FIRST&textfield2=LAST&age=&affid=

I am using the file_get_contents($URL) function to get the page but need help after that. Specifically, I would like to scrape only the results from a certain state if there are multiple results for that name.



You need the awesome simple_html_dom class.

With this class you can query the webpage's DOM in a similar way to jQuery.

First include the class in your page, then get the page content with this snippet:

$html = file_get_html('http://www.birthdatabase.com/cgi-bin/query.pl?textfield=' . $first . '&textfield2=' . $last . '&age=&affid=');

Then you can use CSS selections to scrape your data (something like this):

$n = 0;
$row = array();
foreach($html->find('table tbody tr td div font b table tbody') as $element) {
    // find() returns an array of nodes, so take the first row and read its plain text
    $row[$n]['tr'] = $element->find('tr', 0)->plaintext;
    $n++;
}

// output your data
print_r($row);



Source: http://stackoverflow.com/questions/15601584/php-scraping-data-from-a-website

Obtaining reddit data

I am interested in obtaining data from different reddit subreddits. Does anyone know if there is a reddit (or other) API, similar to what Twitter provides, for crawling all the pages?


Yes, reddit has an API that can be used for a variety of purposes such as data collection, automatic commenting bots, or even to assist in subreddit moderation.

There are a few places to discover information on reddit's API:

    github reddit wiki -- provides the overview and rules for using reddit's API (follow the rules)
    automatically generated API docs -- provides information on the requests needed to access most of the API endpoints
    /r/redditdev -- the reddit community dedicated to answering questions both about reddit's source code and about reddit's API

If there is a particular programming language you are already familiar with, you should check out the existing set of API wrappers for various languages. Despite my bias (I am the package maintainer) I am quite certain PRAW, for python, has support for the largest number of reddit API features.
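For example, a minimal sketch with a current version of PRAW (the credentials are placeholders; you obtain real ones by registering a script app in your reddit account settings):

# List the ten hottest posts in a subreddit via PRAW.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",              # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="my-data-collector/0.1 by your_username",
)

for submission in reddit.subreddit("python").hot(limit=10):
    print(submission.score, submission.title)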



Source: http://stackoverflow.com/questions/14322834/obtaining-reddit-data

Saturday 23 August 2014

Scraping data in dynamic sites

I'm trying to scrape data from our local government. What I want are addresses of kids' adoption offices. Here in Brazil, all adoptions go through the government. So I have the URL of one office; there are 2 or 3 thousand more. But if I can manage to get one, the others will be easy. I made many attempts; below I show three.

The problem could be related to JavaScript (Ajax, maybe) that refreshes the page.

Note: I am not a PHP developer.

First attempt

echo '<html><head></head><body>';
echo '<h1>Scraper PHP GET 1</h1>';

echo ini_get("allow_url_fopen");
echo ini_get("allow_url_fopen");

// I used this url for test
//$url = 'http://www.portaldaadocao.com.br';

//This is the URL that I really want
$url = 'http://www.cnj.jus.br/cna/Controle/ConsultaPublicaBuscaControle.php?transacao=CONSULTA&vara=2673';

$html = file_get_contents($url);
var_dump($html);

echo '</body></html>';

// Output
// 11
// Warning:
file_get_contents(http://www.cnj.jus.br/cna/Controle/ConsultaPublicaBuscaControle.php?
transacao=CONSULTA&vara=2673) [function.file-get-contents]: failed to open stream: HTTP
request failed! HTTP/1.1 404 Not Found in /home/rsl/www/sc01_get.php on line 14
// bool(false)

Second attempt

echo '<html><head></head><body>';
echo '<h1>Scraper PHP CURL 3</h1>';

// I used this url for test
//$url = 'http://www.portaldaadocao.com.br';

//This is the URL that I really want
$url = 'http://www.cnj.jus.br/cna/Controle/ConsultaPublicaBuscaControle.php?transacao=CONSULTA&vara=2673';

$curl = curl_init($url);
@curl_setopt($curl, CURLOPT_POSTFIELDS, "foo");
@curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
@curl_setopt($curl, CURLOPT_CUSTOMREQUEST, "POST");;

$html=@curl_exec($curl);

if (!$html) {
    echo "<br />cURL error number:" .curl_errno($curl);
    echo "<br />cURL error:" . curl_error($curl);
    exit;
}
else{
   echo '<br>begin HTML[';
    echo  $html;
   echo '<br>]end html ';
}
echo '</body></html>';

// Output
// 1

Third attempt

function curl($url){
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER,1);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.6 (KHTML, like Gecko) Chrome/16.0.897.0 Safari/535.6');
    curl_setopt($ch, CURLOPT_HEADER, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_COOKIEFILE, "cookie.txt");
    curl_setopt($ch, CURLOPT_COOKIEJAR, "cookie.txt");
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
    curl_setopt($ch, CURLOPT_REFERER, "http://www.windowsphone.com");

    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

echo '<html><head></head><body>';
echo '<h1>Scraper PHP CURL 5</h1>';

// I used this url for test
//$url = 'http://www.portaldaadocao.com.br';

//This is the URL that I really want
$url = 'http://www.cnj.jus.br/cna/Controle/ConsultaPublicaBuscaControle.php?transacao=CONSULTA&vara=2673';

$curl = curl_init($url);
@curl_setopt($curl, CURLOPT_POSTFIELDS, "foo");
@curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
@curl_setopt($curl, CURLOPT_CUSTOMREQUEST, "POST");;

$html=@curl($curl);


if (!$html) {
    echo "<br />cURL error number:" .curl_errno($curl);
    echo "<br />cURL error:" . curl_error($curl);
    exit;
}
else{
    echo '<br>begin HTML[';
    echo  $html;
    echo '<br>]end html ';
}
echo '</body></html>';

// Output
// cURL error number:0
// cURL error:

If the pages are really AJAX-based, meaning the information that you need to scrape is loaded or shown through JavaScript execution, you will need another approach: you will need to automate a real browser. You can go the Selenium route, which can be written in a number of languages, or use CasperJS with JavaScript as the programming language.
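A minimal sketch of the Selenium route in Python (bindings exist for several languages; the selectors for the actual address data would still have to be worked out against the rendered page):

# Load the page in a real browser so its JavaScript runs, then read the
# rendered HTML back out. The URL is the one from the question.
import time

from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get("http://www.cnj.jus.br/cna/Controle/ConsultaPublicaBuscaControle.php"
               "?transacao=CONSULTA&vara=2673")
    time.sleep(5)              # crude wait for the AJAX calls to finish;
                               # a WebDriverWait on a specific element is better
    print(driver.page_source)  # rendered HTML, ready to parse for the addresses
finally:
    driver.quit()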



Source: http://stackoverflow.com/questions/24611046/scraping-data-in-dynamic-sites