Example: crawling web pages and downloading torrent files with Python's urllib2



Grabbing torrents with the urllib2 and re modules

Approach

1. Log in to the forum programmatically (only needed for boards that require a login; see the cookie-handling sketch after this list).

2. Open the target board.

3. Walk the thread listing (fetch a given listing page, then collect the URL of every thread on that page).

4. Visit each thread URL in turn and extract the torrent download address from the page source (with a regular expression or a third-party HTML parsing library).

5. Request the torrent page and download the torrent file.
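Step 1 is not exercised by the full script below (its target board needs no login), but the cookielib import there hints at how it would be done. Here is a minimal sketch; the login URL and the form field names (username, password) are hypothetical placeholders, not taken from any real forum.

import urllib
import urllib2
import cookielib

# Keep cookies across requests so the session survives the login POST.
cookieJar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookieJar))
urllib2.install_opener(opener)  # plain urllib2.urlopen() now reuses the cookies

# Hypothetical login endpoint and field names - adjust to the target forum.
loginData = urllib.urlencode({'username': 'xxx', 'password': 'yyy'})
urllib2.urlopen('http://xxx.yyy.zzz/login.php', loginData).read()

After this, every later urllib2.urlopen() call sends the session cookie, so pages that normally require a login render as they would in a browser.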

The code is as follows:
import urllib
import urllib2
import cookielib
import re
import sys
import os

# site is the forum's base address | fid is the board ID
site = "http://xxx.yyy.zzz/"
source = "thread0806.php?fid=x&search=&page="

# directory where downloaded torrents and logs are stored
btSave = "./clyzwm/"
if os.path.isdir(btSave):
 print btSave + " exists"
else:
 os.mkdir(btSave)

logFile = "./clyzwm/down.log"
errorFile = "./clyzwm/error.log"
sucFile = "./clyzwm/success.log"

# common request headers (defined for reference; btDown sets its own User-Agent below)
headers = {'User-Agent' : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36',
           'Referer' : 'http://xxx.yyy.zzz/'}

def btDown(url, dirPath):
 logger(logFile, "download file : " + url)
 try:
  #pageCode = urllib2.urlopen(url).read()
  #print pageCode
  btStep1 = re.findall('http://[\w]+\.[\w]+\.[\w]{0,4}/[\w]{2,6}\.php\?[\w]{2,6}=([\w]+)', url, re.I)
  #print btStep1
  if len(btStep1)>0:
   ref = btStep1[0]
   downsite = ""
   downData = {}
   if len(ref)>20:
    downsite = re.findall('http://www.[\w]+\.[\w]+/', url)[0]
    downsite = downsite + "download.php"
    reff = re.findall('input\stype=\"hidden\"\sname=\"reff\"\svalue=\"([\w=]+)\"', urllib2.urlopen(url).read(), re.I)[0]
    downData = {'ref': ref, 'reff': reff, 'submit': 'download'}
   else:
    downsite = "http://www.downhh.com/download.php"
    downData = {'ref': ref, 'rulesubmit': 'download'}
   #print "bt site - " +  downsite + "\n downData:"
   #print downData
   downData = urllib.urlencode(downData)
   # passing a data body to urllib2.Request turns the request into a POST
   downReq = urllib2.Request(downsite, downData)
   downReq.add_header('User-Agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36')
   downPost = urllib2.urlopen(downReq)
   stream = downPost.read(-1)  # read the entire response
   if (len(stream) > 1000):  # treat very short responses as failures (likely an error page)
    downPost.close()
    name = btStep1[0] + ".torrent"
    fw = open(dirPath + name, 'wb')  # binary mode, so the torrent payload is not mangled
    fw.write(stream)
    fw.close()
    logger(sucFile, url + "\n")
   else:
    logger(errorFile, url + "\n")
 except urllib2.URLError,e:
  print e.reason

def logger(logFile, msg):
 print msg
 fw = open(logFile, 'a')
 fw.write(msg)
 fw.close()

for i in range(1,1000):
 logger(log@R_301_6852@,"\n\n\n@ page " + str(i) + " ...")
 part = site + source + str(i)

 content = urllib2.urlopen(part).read()
 content = content.decode('gbk').encode('utf8')  # the forum pages are GBK-encoded; re-encode as UTF-8
 #print content

 pages = re.findall('<a\s+href=\"(htm_data/[\d]+/[\d]+/[\d]+\.html).*?<\/a>', content, re.I)
 #print pages

 for page in pages:
  page = site + page
  #logger(logFile, "\n# visiting " + page + " ...")
  pageCode = urllib2.urlopen(page).read()
  #print pageCode
  zzJump = re.findall('http://www.viidii.info/\?http://[\w]+/[\w]+\?[\w]{2,6}=[\w]+', pageCode)
  #zzJump = re.findall('http://www.viidii.info/\?http://[\w/\?=]*', pageCode)
  if len(zzJump) > 0:
   zzJump = zzJump[0]
   #print "- jump page - " + zzJump
   # pageCode (fetched above) already holds this thread's HTML; no need to re-fetch it
   zzPage = re.findall('http://[\w]+\.[\w]+\.[\w]+/link[\w]?\.php\?ref=[\w]+', pageCode)
   if len(zzPage) > 0:
    zzPage = zzPage[0]
    logger(log@R_301_6852@,"\n- zhongzi page -" + zzPage)
    btDown(zzPage,btSave)
   else:
    logger(log@R_301_6852@,"\n. NOT FOUND .")
  else:
   logger(log@R_301_6852@,"\n... NOT FOUND ...")
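To sanity-check the thread-listing regular expression used in the loop above, here is a tiny standalone snippet; the HTML fragment is invented for illustration and only mimics the shape the regex expects:

import re

sampleHtml = '<a href="htm_data/16/1401/123456.html">some thread title</a>'
print re.findall('<a\s+href=\"(htm_data/[\d]+/[\d]+/[\d]+\.html).*?<\/a>', sampleHtml, re.I)
# prints: ['htm_data/16/1401/123456.html']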
