Object-oriented programming (OOP) is an important and effective programming paradigm, distinct from the more familiar procedural style. Package development in both R and Python makes extensive use of object-oriented design.
Baidu Baike's entry on object-oriented programming explains it as follows:
Object-oriented programming (OOP) is a programming paradigm as well as a method of program development. Its three most important characteristics are encapsulation, inheritance, and polymorphism.
An object is an instance of a class. OOP treats objects as the basic units of a program, encapsulating data and the code that operates on it, in order to improve the reusability, flexibility, and extensibility of software.
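These three characteristics can be shown in a minimal Python sketch; the Animal, Dog, and Cat classes are purely illustrative and not part of the scraper below:

```python
class Animal:
    """Encapsulation: state (name) and behavior live together in one object."""
    def __init__(self, name):
        self.name = name

    def speak(self):
        return "..."

class Dog(Animal):
    """Inheritance: Dog reuses Animal's constructor and interface."""
    def speak(self):
        # Polymorphism: the same call produces subclass-specific behavior
        return "{} says woof".format(self.name)

class Cat(Animal):
    def speak(self):
        return "{} says meow".format(self.name)

# One uniform speak() call, different behavior per concrete class
for pet in (Dog("Rex"), Cat("Tom")):
    print(pet.speak())
```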
Object orientation in R is largely built on generic functions; the S3, S4, and R6 class systems can all be used to write object-oriented code.
If you are not yet familiar with R's S3 and S4 classes, these posts by Zhang Dan are worth reading:
http://blog.fens.me/r-object-oriented-intro/
http://blog.fens.me/r-class-s3/
http://blog.fens.me/r-class-s4/
Zhang Dan's articles give a detailed introduction to how S3 and S4 object orientation is implemented in R.
Below, I refactor the scraping example from an earlier post on multi-process/multi-threaded programming into object-oriented versions based on S3 and S4 classes.
library("RCurl")
library("XML")
library("magrittr")
Since the task is to scrape big-data job listings from the Hellobi site, we define a generic function named GetData and then register a single method, GetData.hellobi, for objects of class hellobi (a generic can dispatch to any number of such methods).
# The generic: dispatches on the class of its argument
GetData <- function(object) UseMethod("GetData")

# The method invoked for objects of class "hellobi"
GetData.hellobi <- function(object){
  d <- debugGatherer()
  handle <- getCurlHandle(debugfunction = d$update, followlocation = TRUE, cookiefile = "", verbose = TRUE)
  while (object$i < 10){
    object$i <- object$i + 1
    url <- sprintf("https://www.hellobi.com/jobs/search?page=%d", object$i)
    tryCatch({
      content    <- getURL(url, .opts = list(httpheader = object$headers), .encoding = "utf-8", curl = handle) %>% htmlParse()
      job_item   <- content %>% xpathSApply(., "//div[@class='job_item_middle pull-left']/h4/a", xmlValue)
      job_links  <- content %>% xpathSApply(., "//div[@class='job_item_middle pull-left']/h4/a", xmlGetAttr, "href")
      job_info   <- content %>% xpathSApply(., "//div[@class='job_item_middle pull-left']/h5", xmlValue, trim = TRUE)
      job_salary <- content %>% xpathSApply(., "//div[@class='job_item-right pull-right']/h4", xmlValue, trim = TRUE)
      job_origin <- content %>% xpathSApply(., "//div[@class='job_item-right pull-right']/h5", xmlValue, trim = TRUE)
      myresult   <- data.frame(job_item, job_links, job_info, job_salary, job_origin, stringsAsFactors = FALSE)
      object$fullinfo <- rbind(object$fullinfo, myresult)
      cat(sprintf("Page [%d] scraped successfully!", object$i), sep = "\n")
    }, error = function(e){
      cat(sprintf("Failed to scrape page [%d]!", object$i), sep = "\n")
    })
    Sys.sleep(runif(1))
  }
  cat("All pages done!")
  return(object$fullinfo)
}
initialize <- list(
i = 0,
fullinfo = data.frame(),
headers = c(
"Referer"="https://www.hellobi.com/jobs/search",
"User-Agent"="Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36"
)
)
# Attach the class "hellobi" so that GetData() dispatches to GetData.hellobi()
mywork <- structure(initialize, class = "hellobi")
mydata <- GetData(mywork)
Of course, you can define multiple methods for the GetData generic: one for scraping course information, another for blog posts, and so on. You simply attach the appropriate class to an instance; when the instance is passed to the generic, R looks up the method registered for that class and calls it automatically. R's summary, plot, and print functions are all implemented through this generic-function mechanism.
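R's generic-function dispatch has a rough analogue in Python's functools.singledispatch, which selects an implementation by the type of its first argument. A hypothetical sketch (the Jobs and Courses marker classes are invented for illustration):

```python
from functools import singledispatch

# Marker classes playing the role of R's class attribute
class Jobs: pass
class Courses: pass

@singledispatch
def get_data(task):
    # Fallback, comparable to calling an R generic with no matching method
    raise NotImplementedError("no method for " + type(task).__name__)

@get_data.register(Jobs)
def _(task):
    return "scraping job listings"

@get_data.register(Courses)
def _(task):
    return "scraping course listings"

print(get_data(Jobs()))     # -> scraping job listings
print(get_data(Courses()))  # -> scraping course listings
```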
Implementing the same example in object-oriented style with S4 classes:
initialize <- list(
i = 0,
fullinfo = data.frame(),
headers = c(
"Referer"="https://www.hellobi.com/jobs/search",
"User-Agent"="Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36"
)
)
setClass("GetData",
slots = c(
i="numeric",
fullinfo="data.frame",
headers="character"
),
prototype = initialize
)
GetData <- new('GetData')
setGeneric('hellobi',
function(object) {
standardGeneric('hellobi')
}
)
setMethod('hellobi', 'GetData',
function(object) {
d <- debugGatherer()
handle <- getCurlHandle(debugfunction=d$update,followlocation=TRUE,cookiefile="",verbose = TRUE)
while (object@i < 10){
object@i = object@i+1
url <- sprintf("https://www.hellobi.com/jobs/search?page=%d",object@i)
tryCatch({
content <- getURL(url,.opts=list(httpheader=object@headers),.encoding="utf-8",curl=handle) %>% htmlParse()
job_item <- content %>% xpathSApply(.,"//div[@class='job_item_middle pull-left']/h4/a",xmlValue)
job_links <- content %>% xpathSApply(.,"//div[@class='job_item_middle pull-left']/h4/a",xmlGetAttr,"href")
job_info <- content %>% xpathSApply(.,"//div[@class='job_item_middle pull-left']/h5",xmlValue,trim = TRUE)
job_salary <- content %>% xpathSApply(.,"//div[@class='job_item-right pull-right']/h4",xmlValue,trim = TRUE)
job_origin <- content %>% xpathSApply(.,"//div[@class='job_item-right pull-right']/h5",xmlValue,trim = TRUE)
myresult <- data.frame(job_item,job_links,job_info,job_salary,job_origin,stringsAsFactors = FALSE)
object@fullinfo <- rbind(object@fullinfo,myresult)
cat(sprintf("Page [%d] scraped successfully!",object@i),sep = "\n")
},error = function(e){
cat(sprintf("Failed to scrape page [%d]!",object@i),sep = "\n")
})
Sys.sleep(runif(1))
}
cat("All pages done!")
return (object@fullinfo)
}
)
### Call the method registered for the class:
mydata1 <- hellobi(GetData)
DT::datatable(mydata1)
On the differences between S3 and S4: S3 is informal. A class is just a character attribute attached to an object, a method is any function named generic.class, and no validity checking is performed. S4 is formal: classes are declared with setClass() and carry typed slots, instances are created with new(), slots are accessed with @ instead of $, and methods are registered with setMethod() on generics created by setGeneric(). S4 also supports dispatch on more than one argument.
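The idea of formally declared, typed slots loosely resembles a Python dataclass with declared fields. A hypothetical sketch mirroring the scraper's state (the names are illustrative, not part of the real code):

```python
from dataclasses import dataclass, field

@dataclass
class GetDataState:
    # Fields with declared types, loosely analogous to S4 slots
    i: int = 0
    fullinfo: list = field(default_factory=list)
    headers: dict = field(default_factory=dict)

state = GetDataState()
state.i += 1
print(state.i)  # -> 1
```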
The same scraper in Python:
from urllib.request import urlopen,Request
import pandas as pd
import time
from lxml import etree
class GetData:
    # Initialize parameters
    def __init__(self):
        self.start = 0
        self.headers = {
            'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36',
            'Referer':'https://www.hellobi.com/jobs/search'
        }
        self.myresult = {
            "job_item":[],
            "job_links":[],
            "job_info":[],
            "job_salary":[],
            "job_origin":[]
        }
    # Define the available method
    def getjobs(self):
        while self.start < 10:
            self.start += 1
            url = "https://www.hellobi.com/jobs/search?page={}".format(self.start)
            try:
                pagecontent = urlopen(Request(url, headers=self.headers)).read().decode('utf-8')
                result = etree.HTML(pagecontent)
                self.myresult["job_item"].extend(result.xpath('//div[@class="job_item_middle pull-left"]/h4/a/text()'))
                self.myresult["job_links"].extend(result.xpath('//div[@class="job_item_middle pull-left"]/h4/a/@href'))
                self.myresult["job_info"].extend([text.xpath('string(.)').strip() for text in result.xpath('//div[@class="job_item_middle pull-left"]/h5')])
                self.myresult["job_salary"].extend(result.xpath('//div[@class="job_item-right pull-right"]/h4/span/text()'))
                self.myresult["job_origin"].extend(result.xpath('//div[@class="job_item-right pull-right"]/h5/span/text()'))
                print("Scraping page [{}]".format(self.start))
            except Exception:
                print("Failed to scrape page [{}]".format(self.start))
            time.sleep(1)
        print("All pages done")
        return pd.DataFrame(self.myresult)

if __name__ == "__main__":
    t0 = time.time()
    mydata = GetData()
    myresult = mydata.getjobs()
    t1 = time.time()
    total = t1 - t0
    print("Elapsed time: {}".format(total))
The above shows the same scraper written in object-oriented style in both R and Python, intended purely as a hands-on exercise in object-oriented thinking. For a more thorough look at advanced OOP in R and Python, study the source code of mainstream packages: R's ggplot2 and rvest, for example, make extensive internal use of S3 classes, and the major Python libraries follow similar patterns. Data for previous posts is available on my GitHub: https://github.com/ljtyduyu/DataWarehouse/tree/master/File