Web Scraping with Scala-#2

Categories: Scala
Tags: #Scala #Web crawler

Continued from Web Scraping with Scala-#1

Requirement

  1. Use Scala Scraper (a Scala library for scraping content from HTML pages)
  2. Handle the exception cases that can occur when getting a URL

Approach

Import Library

Create an sbt project to make importing the library convenient, reusing the start-kit made earlier.

// in build.sbt
libraryDependencies ++= Seq(
  "net.ruippeixotog" %% "scala-scraper" % "1.2.0"
)

Use Scala Scraper

Under the hood, Scala Scraper uses Jsoup, a Java HTML parser library.
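Because Jsoup does the parsing, a JsoupBrowser can also parse a raw HTML string directly, which is handy for trying out the extractor DSL without any network call. A minimal sketch; the HTML snippet and the names `ParseDemo`, `pageTitle`, and `intro` are made up for illustration:

```scala
import net.ruippeixotog.scalascraper.browser.JsoupBrowser
import net.ruippeixotog.scalascraper.dsl.DSL._
import net.ruippeixotog.scalascraper.dsl.DSL.Extract._

object ParseDemo {
  val browser = JsoupBrowser()

  // Parse an in-memory HTML string instead of fetching over the network
  val doc = browser.parseString(
    "<html><head><title>Hello Scraper</title></head>" +
    "<body><h1>Heading</h1><p class=\"intro\">First paragraph</p></body></html>")

  // >> applies an extractor; text("css-selector") pulls the matched element's text
  val pageTitle: String = doc >> text("title")
  val intro: String     = doc >> text("p.intro")

  def main(args: Array[String]): Unit = {
    println(pageTitle) // Hello Scraper
    println(intro)     // First paragraph
  }
}
```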

Extends JsoupBrowser

Scala's traits make it easy to build a custom extension of the JsoupBrowser object.

Result

// in Main.scala
import net.ruippeixotog.scalascraper.browser.JsoupBrowser
import net.ruippeixotog.scalascraper.model.Document

object Scraper2 extends App {

  trait CustomBrowser extends JsoupBrowser {
    case class ErrorMessage( statusCode: Int, message: String, url: String )

    // The trait already is a JsoupBrowser, so the inherited get() can be
    // called directly; no separate JsoupBrowser() instance is needed.
    def getDocumentFromUrl( url: String ): Either[ErrorMessage, Document] = {
      try {
        Right(get(url))
      } catch {
        case e: org.jsoup.HttpStatusException => Left(ErrorMessage(e.getStatusCode, e.getMessage, e.getUrl))
        case e: org.jsoup.SerializationException => Left(ErrorMessage(400, e.getMessage, url))
        case e: org.jsoup.UnsupportedMimeTypeException => Left(ErrorMessage(415, e.getMessage, e.getUrl))
        // Connection failures and timeouts carry no HTTP status; 0 is a sentinel
        case e: java.io.IOException => Left(ErrorMessage(0, e.getMessage, url))
      }
    }
  }
  object browser extends CustomBrowser

  val url = "http://jungbin.kim"
  val doc = browser.getDocumentFromUrl( url )

  println(doc)
}
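The Either returned by getDocumentFromUrl is meant to be consumed with pattern matching: Right carries the Document, Left carries the ErrorMessage. A self-contained sketch of that consumption, using a hypothetical fetchTitle stub in place of the real network call (the ErrorMessage shape mirrors the one above; the stub returns a page title String instead of a Document so it runs offline):

```scala
object EitherDemo {
  // Mirrors the ErrorMessage case class defined in CustomBrowser
  case class ErrorMessage(statusCode: Int, message: String, url: String)

  // Hypothetical stand-in for getDocumentFromUrl: succeeds for well-formed
  // URLs, fails with a 400-style error otherwise
  def fetchTitle(url: String): Either[ErrorMessage, String] =
    if (url.startsWith("http://") || url.startsWith("https://"))
      Right("stub title for " + url)
    else
      Left(ErrorMessage(400, "malformed URL", url))

  // Pattern match on the Either to handle both outcomes explicitly
  def describe(result: Either[ErrorMessage, String]): String = result match {
    case Right(title)                     => s"OK: $title"
    case Left(ErrorMessage(code, msg, u)) => s"[$code] $msg ($u)"
  }

  def main(args: Array[String]): Unit = {
    println(describe(fetchTitle("http://jungbin.kim"))) // OK: stub title for http://jungbin.kim
    println(describe(fetchTitle("jungbin.kim")))        // [400] malformed URL (jungbin.kim)
  }
}
```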
