Scrape and auto-save all images from the Douban Meizi site


I just learned BeautifulSoup, so I'm trying it out on something practical.

All images come from http://dbmeizi.com

My environment: Python 2.7.6 with BS4. The script runs fine in PowerShell or a plain command prompt. Please make sure the BeautifulSoup (BS) module is installed.
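If you're not sure whether BS4 is present, here is a minimal sanity check (the pip command assumes pip is available on your system):

    # Quick check: import bs4 and print its version.
    # If this raises ImportError, install it first, e.g.: pip install beautifulsoup4
    import bs4
    print bs4.__version__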

I have no programming background at all and have only just picked up a bit of Python, so I hope this gives fellow Python beginners some confidence~~

# -*- coding:utf8 -*-
# Scrape images from dbmeizi.com.
from bs4 import BeautifulSoup
import os, sys, urllib2

# Create the output folder (just learned this yesterday)
path = os.getcwd()                            # current working directory
new_path = os.path.join(path, u'豆瓣妹子')    # folder named "Douban Meizi"
if not os.path.isdir(new_path):
    os.mkdir(new_path)

def page_loop(page=0):
    url = 'http://www.dbmeizi.com/?p=%s' % page
    content = urllib2.urlopen(url)
    soup = BeautifulSoup(content, 'html.parser')   # explicit parser avoids a BS4 warning
    my_girl = soup.find_all('img')
    # Crude stop condition: a page with no <img> tags means we've run out of pages
    if not my_girl:
        print u'All images have been fetched'
        sys.exit(0)
    print u'Starting to fetch'
    for girl in my_girl:
        link = girl.get('src')
        flink = 'http://www.dbmeizi.com/' + link
        print flink
        content2 = urllib2.urlopen(flink).read()
        # Use the last 11 characters of the URL as the file name
        # (the with statement is something I just learned on OSC)
        with open(os.path.join(new_path, flink[-11:]), 'wb') as f:
            f.write(content2)
    page = int(page) + 1
    print u'Moving on to the next page'
    print 'the %s page' % page
    page_loop(page)

page_loop()
# This snippet comes from http://byrx.net
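Two rough edges worth noting: page_loop calls itself once per page, so a site with very many pages could hit Python's default recursion limit, and prefixing 'http://www.dbmeizi.com/' to every src breaks if the site ever serves absolute image URLs. Below is a minimal sketch of an iterative variant that handles both; fetch_all_pages is a hypothetical name, and the page/URL layout is assumed to be the same as in the script above:

    # Iterative variant: a while loop instead of recursion, and urljoin to build
    # image URLs that work whether src is relative or absolute.
    from bs4 import BeautifulSoup
    import os, urllib2, urlparse

    def fetch_all_pages(base='http://www.dbmeizi.com/', out_dir=u'豆瓣妹子'):
        # Assumes out_dir already exists (the script above creates it).
        page = 0
        while True:
            soup = BeautifulSoup(urllib2.urlopen('%s?p=%s' % (base, page)),
                                 'html.parser')
            imgs = soup.find_all('img')
            if not imgs:                  # no <img> tags: we've run past the last page
                break
            for img in imgs:
                src = img.get('src')
                if not src:
                    continue
                flink = urlparse.urljoin(base, src)   # handles relative and absolute src
                data = urllib2.urlopen(flink).read()
                with open(os.path.join(out_dir, flink[-11:]), 'wb') as f:
                    f.write(data)
            page += 1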
