When I try to parse a large number of HTML documents with Jsoup, I get a SocketTimeoutException.
For example, I have a list of links:
<a href="www.domain.com/url1.html">link1</a>
<a href="www.domain.com/url2.html">link2</a>
<a href="www.domain.com/url3.html">link3</a>
<a href="www.domain.com/url4.html">link4</a>
For each link, I parse the document the URL points to (taken from the href attribute) to get additional information from those pages.
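The crawl described above might be sketched roughly like this (a minimal sketch under assumptions: the index URL is made up, and the title is just one example of "other information" to extract):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class CrawlSketch {
    public static void main(String[] args) throws Exception {
        // Fetch the page holding the link list (hypothetical URL)
        Document index = Jsoup.connect("http://www.domain.com/links.html").get();

        // For each <a href=...>, fetch and parse the linked document
        for (Element link : index.select("a[href]")) {
            Document page = Jsoup.connect(link.absUrl("href")).get();
            System.out.println(page.title());
        }
    }
}
```

Each `Jsoup.connect(...).get()` here is a separate network round trip, which is where the read timeout can occur.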
I understand this can take a lot of time, but how do I get rid of this exception? Here is the whole stack trace:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(Unknown Source)
at java.io.BufferedInputStream.fill(Unknown Source)
at java.io.BufferedInputStream.read1(Unknown Source)
at java.io.BufferedInputStream.read(Unknown Source)
at sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source)
at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
at java.net.HttpURLConnection.getResponseCode(Unknown Source)
at org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:381)
at org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:364)
at org.jsoup.helper.HttpConnection.execute(HttpConnection.java:143)
at org.jsoup.helper.HttpConnection.get(HttpConnection.java:132)
at app.ForumCrawler.crawl(ForumCrawler.java:50)
at Main.main(Main.java:15)
Posted on 2011-07-04 20:40:52
I think you can do
Jsoup.connect("...").timeout(10 * 1000).get();
This sets the timeout to 10 seconds.
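Even with a longer timeout, an occasional slow server can still trip it, so it may also help to wrap each fetch in a small retry loop. A minimal, library-agnostic sketch (the `Fetcher` interface and `fetchWithRetry` helper are made up here; in real code the lambda body would be something like `Jsoup.connect(url).timeout(10 * 1000).get()`):

```java
import java.net.SocketTimeoutException;
import java.util.concurrent.atomic.AtomicInteger;

public class RetrySketch {
    // Stand-in for a network fetch such as Jsoup.connect(url).timeout(...).get()
    interface Fetcher<T> {
        T fetch() throws SocketTimeoutException;
    }

    // Retry the fetch up to maxAttempts times; rethrow the last timeout if all fail.
    static <T> T fetchWithRetry(Fetcher<T> f, int maxAttempts) throws SocketTimeoutException {
        SocketTimeoutException last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return f.fetch();
            } catch (SocketTimeoutException e) {
                last = e; // remember the failure and try again
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger calls = new AtomicInteger();
        // Simulated fetcher: times out twice, then succeeds on the third attempt.
        String result = fetchWithRetry(() -> {
            if (calls.incrementAndGet() < 3) {
                throw new SocketTimeoutException("Read timed out");
            }
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls.get() + " attempts");
    }
}
```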
Posted on 2018-02-06 22:36:46
There is an error at https://jsoup.org/apidocs/org/jsoup/Connection.html. The default timeout is not 30 seconds; it is 3 seconds. Look at the Javadoc in the code: it says 3000 milliseconds.
Posted on 2019-01-11 21:05:11
I got the same error:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
and only setting .userAgent("Opera") worked for me.
So I used the Connection class's Connection userAgent(String userAgent) method to set the Jsoup user agent, something like:
Jsoup.connect("link").userAgent("Opera").get();
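Combining this answer with the timeout one above, both options can be set on the same fluent chain (a sketch; the Opera user-agent string and the 10-second value are just example choices, not required values):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class FetchSketch {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("http://www.domain.com/url1.html") // hypothetical URL
                .userAgent("Opera")      // some servers reject the default Java agent
                .timeout(10 * 1000)      // raise the 3-second default to 10 seconds
                .get();
        System.out.println(doc.title());
    }
}
```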
https://stackoverflow.com/questions/6571548