Python Baidu Space Backup, Improved Version 1


Backing up a Baidu blog with Python. This version applies some code optimizations to that functionality; the performance still...

'''
Created on Apr 23, 2010

@author: Leyond
'''
import urllib
from BeautifulSoup import BeautifulSoup
import re


def saveToFile(dir, htmlContent, title, url=""):
    # Save one post as a standalone HTML page under <dir>/<url>,
    # e.g. <user>/235bf024035c721b8b82a1c6.html.
    nFail = 0
    dir += "/%s" % (url)
    # Title and body may be unicode (from BeautifulSoup); encode to UTF-8
    # before writing so Chinese text does not raise UnicodeEncodeError.
    if isinstance(title, unicode):
        title = title.encode('utf-8')
    if isinstance(htmlContent, unicode):
        htmlContent = htmlContent.encode('utf-8')
    while nFail < 1:  # a single attempt; raise the limit to retry on failure
        try:
            myfile = open(dir, 'w')
            myfile.write("<html><head><title>" + title + "</title></head><body>"
                         + htmlContent + "</body></html>")
            myfile.close()
            return
        except IOError:
            nFail += 1
            print "%s download Fail." % (title)


def findNextBlogHtml(user, htmlContent):
    # Baidu Space embeds the previous post's address in a JS variable such as
    # var pre = "/blog/item/<hash>.html"; extract that hash-named file,
    # or return "None" when the oldest post has been reached.
    urls = re.findall(r"var.*pre.*?/blog/item/.*?html", htmlContent, re.I)
    if len(urls) == 1:
        blogUrl = re.findall(r"/blog/item/\w*\.html", urls[0], re.I)
        if blogUrl and len(blogUrl[0]) > 17:
            print blogUrl[0]
            htmlAddr = blogUrl[0][11:]   # strip the leading "/blog/item/"
        else:
            htmlAddr = "None"
    else:
        htmlAddr = "None"
    return htmlAddr

def getBlogContentAndTitle(user, htmlUrl):
    # Fetch one post page and pull out its title, publish date and body.
    blogUrl = "http://hi.baidu.com/" + user + "/blog/item/" + htmlUrl
    sock = urllib.urlopen(blogUrl)
    blogHtmlContent = sock.read()
    sock.close()
    # Baidu Space pages are served as GB2312; re-encode to UTF-8
    htmlContent = unicode(blogHtmlContent, 'gb2312', 'ignore').encode('utf-8', 'ignore')

    # parse the html content
    htmlsoup = BeautifulSoup(htmlContent)
    blogContentBlock = htmlsoup.findAll("div", {"id": "m_blog"})
    blogContentBlockZero = blogContentBlock[0].findAll(
        "table", {"style": "table-layout:fixed;width:100%"})

    # get the title
    blogTitleZero = blogContentBlock[0].findAll("div", {"class": "tit"})
    blogTitle = blogTitleZero[0].string

    # get the publish date and prepend it to the post body
    blogPublishDate = blogContentBlock[0].findAll("div", {"class": "date"})
    blogDate = blogPublishDate[0].string
    blogData = ("<B>" + blogDate + "</B>").encode('utf-8') + str(blogContentBlockZero[0])
    return blogData, blogTitle, htmlContent

def backUpBlog(user, firstBlogUrl):
    # Read the post's title and content first
    blogContent, blogTitle, htmlContent = getBlogContentAndTitle(user, firstBlogUrl)

    # save the post as an HTML file in the directory named after the user
    saveToFile(user, blogContent, blogTitle, firstBlogUrl)

    # follow the "previous post" link and recurse until it runs out
    nextBlogUrl = findNextBlogHtml(user, htmlContent)
    if nextBlogUrl != "None":
        backUpBlog(user, nextBlogUrl)
    else:
        print "Backup Finished"


backUpBlog(user="wbweast", firstBlogUrl="235bf024035c721b8b82a1c6.html")
    

    

The usage is the same as in the first post: before running the script, create a directory next to it named after your Baidu username (for my blog that is codedeveloper). Then change the two parameters in the final call: user is your username, and firstBlogUrl is the address of your most recent blog post.
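For example (a hypothetical call: codedeveloper is the username mentioned above, and the hash-named HTML file is made up; use the file name of your own newest post), the last line of the script would become:

backUpBlog(user="codedeveloper",
           firstBlogUrl="0123456789abcdef0123456789abcdef.html")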

One open question: how can Chinese directory names be supported?
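One possible direction (an untested sketch, separate from the script above; ensureDir is a hypothetical helper): in Python 2, encode the unicode name to the local filesystem encoding before creating or opening the path, so that a Chinese username or blog title becomes a usable directory name.

import os
import sys

def ensureDir(name):
    # 'name' may be a unicode string containing Chinese characters
    if isinstance(name, unicode):
        name = name.encode(sys.getfilesystemencoding() or 'utf-8')
    if not os.path.isdir(name):
        os.makedirs(name)
    return name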

Comments
#2 yexin218 2010-04-24
凌绿寒绮 wrote:
Hehe, write me a backup script for my space too ^_^

QQ Space (Qzone) can only be viewed after logging in, so that would probably be a bit difficult.
#1 凌绿寒绮 2010-04-24
Hehe, write me a backup script for my space too ^_^
