Guys, please help — if you can solve this, I'll send a red-packet code as a private thank-you.
Thanks, everyone.
I'm curious how hot-list aggregator sites like 今日热榜 manage to batch-crawl all these boards.
I use cheerio for static pages and Puppeteer for batch crawling, but performance is very slow and it keeps timing out with errors I can't trace.
page.goto times out no matter what limit I set: 30 s, 60 s, 90 s.
It runs fine locally on Windows, but on the remote VPS (a cheap little box) it times out constantly:
Error during data processing: TimeoutError: Navigation timeout of 30000 ms exceeded
at new Deferred (/usr/local/node_modules/puppeteer-core/lib/cjs/puppeteer/util/Deferred.js:59:34)
at Deferred.create (/usr/local/node_modules/puppeteer-core/lib/cjs/puppeteer/util/Deferred.js:21:16)
at new LifecycleWatcher (/usr/local/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/LifecycleWatcher.js:66:60)
at CdpFrame.goto (/usr/local/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/Frame.js:143:29)
at CdpFrame.<anonymous> (/usr/local/node_modules/puppeteer-core/lib/cjs/puppeteer/util/decorators.js:98:27)
at CdpPage.goto (/usr/local/node_modules/puppeteer-core/lib/cjs/puppeteer/api/Page.js:588:43)
at fetchData (file:///usr/local/script/%E8%B4%A2%E7%BB%8F%E7%83%AD%E6%A6%9C.js:51:18)
at async executeProcess (file:///usr/local/script/%E8%B4%A2%E7%BB%8F%E7%83%AD%E6%A6%9C.js:108:24)
async function fetchData(page, name, url, hrefSelector) {
  const maxRetries = 3; // maximum number of attempts
  let attempts = 0;
  while (attempts < maxRetries) {
    try {
      attempts++;
      await page.goto(url, { timeout: 1000 * 30 });
      await page.waitForSelector(hrefSelector, { timeout: 1000 * 30 });
      const results = await page.$$eval(hrefSelector, anchors =>
        anchors.map(anchor => ({ href: anchor.href, text: anchor.textContent.trim() }))
      );
      const trade_date = getCurrentDateTime();
      return { name, news: results, trade_date };
    } catch (error) {
      if (attempts < maxRetries) {
        console.warn(`Error fetching ${url}. Retry attempt ${attempts}...`);
        await delay(2000); // wait 2 seconds before retrying
      } else {
        console.error(`Error fetching ${url} after ${attempts} attempts:`, error);
        throw error;
      }
    }
  }
}
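fetchData calls two helpers, delay and getCurrentDateTime, that weren't included in the post. A minimal sketch of what they presumably look like (the names come from the calls above; the bodies are my guess at the OP's versions):

```javascript
// Hypothetical helpers assumed by fetchData above (not shown in the original post).
function delay(ms) {
  // Resolve after `ms` milliseconds; used to pause between retries.
  return new Promise(resolve => setTimeout(resolve, ms));
}

function getCurrentDateTime() {
  // Local-time timestamp like "2024-05-01 12:34:56" for the trade_date field.
  const pad = n => String(n).padStart(2, '0');
  const d = new Date();
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())} ` +
         `${pad(d.getHours())}:${pad(d.getMinutes())}:${pad(d.getSeconds())}`;
}
```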
import puppeteer from 'puppeteer-extra';
import StealthPlugin from 'puppeteer-extra-plugin-stealth';
import AnonymizeUaPlugin from 'puppeteer-extra-plugin-anonymize-ua';

puppeteer.use(StealthPlugin());
puppeteer.use(AnonymizeUaPlugin());

// Launch the browser and open a page
const browser = await puppeteer.launch({
  args: [
    "--disable-setuid-sandbox",
    "--no-sandbox",
    "--disable-gpu",
    "--no-first-run",
    "--disable-dev-shm-usage",
    "--single-process" // note: this flag is known to be unstable on some Linux setups
  ],
  headless: true
});
console.log('Browser launched √');

const page = await browser.newPage();

// Intercept requests and block unnecessary resource types
await page.setRequestInterception(true);
page.on('request', (request) => {
  const resourceType = request.resourceType();
  if (['image', 'stylesheet', 'font'].includes(resourceType)) {
    request.abort();
  } else {
    request.continue();
  }
});

// Scrape the data, one site at a time
const allContents = [];
for (const data of config) {
  const contents = await fetchData(page, data.name, data.url, data.hrefSelector);
  allContents.push(contents);
}
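One likely cause of the VPS-only timeouts is not visible in the script: page.goto's default waitUntil is 'load', which blocks until every image, script, and iframe has finished. Since $$eval on anchor tags only needs the parsed DOM, a tweak worth trying (my suggestion, not from the original post; 'domcontentloaded' is a real Puppeteer lifecycle value) is:

```javascript
// Suggested navigation options: stop waiting at DOMContentLoaded instead of
// the default 'load'. On a slow VPS uplink, waiting for all subresources of
// a news portal can exceed 30 s by itself.
const gotoOptions = {
  timeout: 1000 * 30,
  waitUntil: 'domcontentloaded', // instead of the default 'load'
};
// usage inside fetchData: await page.goto(url, gotoOptions);
```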
// The `config` array the loop above iterates over:
const config = [
  {
    name: '第一财经',
    url: 'https://www.yicai.com/news/',
    hrefSelector: '#newsRank div:nth-child(1) > ul > li a'
  },
  {
    name: '金融界',
    url: 'https://stock.jrj.com.cn/',
    hrefSelector: 'ul.opportunity-list > li a'
  },
  {
    name: '八阕',
    url: 'https://news.popyard.space/cgi-mod/threads.cgi?lan=cn&r=0&cid=11&t=all',
    hrefSelector: 'div#page_1 > table b > a'
  }
];
Crawling 第一财经 locally on Windows also errors out..
VPS specs:
- Extra IPv4: None
- RAM: 2.5 GB (Included)
- CPU: 2 Cores (Included)
- Operating System: Debian 12 64-bit (Recommended min. 2 GB RAM)
- Location: San Jose, CA (Test IP: 192.210.207.88)
1
dedad558 OP
Total tech beginner here — I've already searched Google and ChatGPT extensively and tested many times, but still can't solve it. Please help!
2
dedad558 OP
A Python solution would also be fine; these three sites keep timing out and I don't really understand anti-scraping.
```
{ name: '第一财经', url: 'https://www.yicai.com/news/', hrefSelector: '#newsRank div:nth-child(1) > ul > li a' },
{ name: '金融界', url: 'https://stock.jrj.com.cn/', hrefSelector: 'ul.opportunity-list > li a' },
{ name: '八阕', url: 'https://news.popyard.space/cgi-mod/threads.cgi?lan=cn&r=0&cid=11&t=all', hrefSelector: 'div#page_1 > table b > a' }
```
Puppeteer occasionally manages to scrape them locally too, but it still times out every so often — I'm stumped.
3
macaodoll 151 days ago via Android
Two words: reverse engineering.
5
dedad558 OP
And I don't know reverse engineering either, haha. Embarrassing.
6
wzdsfl 151 days ago
Crawling this way is like loading the whole page once, fetching all the useless img/css/js/html along with it. Grab the API endpoints directly instead: if the API parameters are encrypted, set breakpoints and dig them out; if it requires authentication, attach the cookie. That's far more efficient than what you're doing now.
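The "fetch directly, no browser" approach this reply describes might look like the sketch below: plain HTTP with a browser-like User-Agent, then link extraction. This is only an illustration — the regex handles simple `<a>` tags; a real version would reuse the cheerio the OP already mentioned, and the header value is just an assumption about what these sites accept.

```javascript
// Pull { href, text } pairs out of raw HTML. Simplified regex-based
// extraction for self-containment; cheerio would be more robust.
function extractLinks(html) {
  const links = [];
  const re = /<a\b[^>]*href="([^"]+)"[^>]*>([^<]*)<\/a>/gi;
  for (const m of html.matchAll(re)) {
    links.push({ href: m[1], text: m[2].trim() });
  }
  return links;
}

// Fetch a page over plain HTTP (no headless browser) and extract its links.
// Some sites reject the default Node fetch User-Agent, hence the override.
async function fetchLinks(url) {
  const res = await fetch(url, {
    headers: { 'User-Agent': 'Mozilla/5.0' },
  });
  return extractLinks(await res.text());
}
```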
9
wushenlun 151 days ago
Isn't this just plaintext?
```html
<div class="cc-cd-cb-ll">
  <span class="s ">70</span>
  <span class="t">19 块?森马棉致冰丝裤,我没看错吧! 原价¥54.9 券后¥19.9</span>
  <span class="e">热销 450 件(近 2 小时)</span>
</div>
</a>
<a href="https://tophub.today/link?domain=taobao.com&url=https%3A%2F%2Fremai.today%2Flink%2F1%2F2gaR2v3hotge6Z5J6OFaq7TDtD-36ON23KC05mnekbjsO" target="_blank" rel="nofollow noopener" itemid="171347537">
  <div class="cc-cd-cb-ll">
    <span class="s ">71</span>
    <span class="t"> [全尺寸一个价] 床笠床罩床套保护套 原价¥139.9 券后¥39.9</span>
    <span class="e">热销 446 件(近 2 小时)</span>
  </div>
```
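Since the fields really do sit in static markup like the snippet above, they can be parsed without a browser. A rough sketch, assuming the class names "s" (rank), "t" (title), and "e" (sales) stay stable — regex-based here for self-containment, though cheerio selectors like `.cc-cd-cb-ll span.t` would be the sturdier way to do the same thing:

```javascript
// Parse each "cc-cd-cb-ll" card into an object keyed by its span classes,
// e.g. { s: '70', t: '19 块?...', e: '热销 450 件(近 2 小时)' }.
function parseCards(html) {
  const cards = [];
  const cardRe = /<div class="cc-cd-cb-ll">([\s\S]*?)<\/div>/g;
  const spanRe = /<span class="(\w+)\s*">([\s\S]*?)<\/span>/g;
  for (const card of html.matchAll(cardRe)) {
    const fields = {};
    for (const span of card[1].matchAll(spanRe)) {
      fields[span[1]] = span[2].trim();
    }
    cards.push(fields);
  }
  return cards;
}
```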
11
longlonglanguage 151 days ago
Any language can crawl this. They most likely wrote an adapter per site, store the results in a database, aggregate the data in the backend, and the web page just displays it.
12
dedad558 OP
@longlonglanguage Yeah, I've switched to hitting the APIs now; where I can't find an API, I scrape the DOM.
13
nx6Ta67v2A43frV2 151 days ago via iPhone
@dedad558 Of course — big companies' security teams aren't there for nothing.
Grabbing this data isn't a one-off job; it's a long-term arms race. The server-side defense rules keep changing, and the gateways also do offline detection of scraper traffic. If you're flagged as a likely bot, you get challenged with human verification; fail it and you're locked out. Baidu and Cloudflare both work this way.
14
nx6Ta67v2A43frV2 151 days ago via iPhone
Also, this is illegal, or at least a legal gray area.
There was news of a programmer who scraped public data from a government website, took the site down, and went to prison. His program had a bug that caused an infinite loop, hammering the site over and over, and the government site happened to be fragile.
15
root71370 150 days ago
I bookmarked a repo a while back — take a look at how it does it:
https://github.com/imsyy/DailyHotApi
16
yuaotian 150 days ago
Silly — just go crawl the aggregators themselves instead 🤣😏
17
mumbler 150 days ago
Scraping needs systematic study plus deliberate practice; if you know nothing, it's hard to help. In the ChatGPT era, you can just feed the page data to GPT and it will extract the data automatically — all you need to do is get the page data.