Search engines operate by creating a copy, or index, of the pages on the web. It is this copy that the system searches when a user enters a query; the user is then directed to the live web page.
Publishers of websites can stop search engines from indexing their pages by indicating their wishes in a robots.txt file on the site.
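By way of illustration, a robots.txt file is a plain-text file placed at a site's root (e.g. www.example.com/robots.txt) containing directives that compliant crawlers read before indexing. A minimal sketch might look like the following (the crawler name "ExampleBot" and the paths are hypothetical):

```
# Block all crawlers from a particular directory
User-agent: *
Disallow: /private/

# Block one named crawler from the entire site
User-agent: ExampleBot
Disallow: /
```

An empty `Disallow:` line, or the absence of the file altogether, is conventionally treated by crawlers as permission to index.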
Though web publishers in the UK have not taken action against search engine companies for copying their pages, Lord Lucas has proposed an amendment to the controversial Digital Economy Bill that would create a legal right to make those copies.
"Every provider of a publicly accessible website shall be presumed to give a standing and non-exclusive licence to providers of search engine services to make a copy of some or all of the content of that website, for the purpose only of providing said search engine services," says his amendment.
Lord Lucas's amendment still gives publishers the right to refuse permission to copy their pages, and an existing robots.txt file would count as a legally binding refusal, the amendment says.
"The presumption referred to … may be rebutted by explicit evidence that such a licence was not granted," it says. "Such explicit evidence shall be found only in the form of statements in a machine-readable file to be placed on the website and accessible to providers of search engine services."
The Digital Economy Bill seeks to pass into law the parts of the Digital Britain report that require legislation. In addition it would create a law not recommended by that report that would require internet service providers to sever connections used by copyright-infringing file sharers in some circumstances.
Lord Lucas has proposed a number of amendments to the Bill. He wants the proposed legislation to give people accused of copyright infringement the right to take action against the accuser if those claims are groundless, and to require anyone claiming that their copyright has been infringed to quantify the alleged damage when making the claim.
The suggestion that search engines should be exempt from copyright would regularise a situation that is accepted common practice, said technology law expert Struan Robertson of Pinsent Masons, the law firm behind OUT-LAW.COM.
"This is how all search engines behave, but there has always been a theoretical argument that what is happening is copyright infringement on a massive scale," said Robertson. "There is also a strong counter-argument that by making material available on a public website a publisher is giving an implied licence to a search engine."
"There has never been a test case in the UK, but in the US a lawyer posted an article online without a robots.txt file. When it was indexed he sued Google for copyright infringement, but the court said that he knew what would happen and had therefore given an implied licence for its copying by a search engine," said Robertson.
A Belgian court found differently when newspaper body Copiepresse took Google to court arguing that its indexing of newspaper websites was copyright infringement. The court agreed and Google and Copiepresse came to an arrangement by which Google's search engine was allowed to index the sites but its Google News service was not.
"This would put into law something that is common practice but that has never actually been codified in the UK," said Robertson. "Most online publishers want to feature in search engines, but some expect to be paid by search engines for making their content available and they see today's copyright laws as giving them leverage to demand money. Lord Lucas's change would break that leverage."
In November, News International chairman Rupert Murdoch accused major search engines of stealing content.
"The people who simply just pick up everything and run with it – steal our stories, we say they steal our stories – they just take them," he told Sky News Australia. "That's Google, that's Microsoft, that's Ask.com, a whole lot of people ... they shouldn't have had it free all the time, and I think we've been asleep."