
How to Scrape LIHKG: A Thought Experiment

Explore the complexities and challenges of scraping LIHKG, a popular social media platform, without actually doing it. Learn how Geonode proxies and the Geonode scraper API can make the process smoother.


by Maricor Bunal

September 7, 2023

LIHKG is a popular online discussion forum that has gained significant traction in Hong Kong and among Cantonese-speaking communities worldwide.

It's one of the biggest forums for the discussion of protests, social unrest events, and more.

Often compared to Reddit, this social media platform allows forum users to discuss a wide range of topics, from politics and current events to entertainment and lifestyle.

The forum's history reveals a strong focus on collective action and large-scale protests.

With its user-generated content and real-time online discussions, LIHKG serves as a valuable resource for researchers, marketers, and data analysts interested in understanding public opinion, social networks, and social trends.

The Complexities of Scraping LIHKG

While the idea of scraping LIHKG might seem straightforward, the reality is far from it.

The website employs a variety of mechanisms to protect its data, ranging from CAPTCHAs to AJAX-loaded content, making it a challenging target for web scraping.

Moreover, ethical and legal considerations add another layer of complexity to the process.

Understanding these intricacies is crucial for anyone considering scraping this or similar online platforms.

It's not just about collecting social media data; it's about doing it responsibly and effectively.

Hypothetical Tools: Geonode Proxies and Scraper API

In a hypothetical scenario where one would scrape LIHKG, certain tools could make the process smoother.

Geonode proxies could be used to bypass IP-based restrictions, allowing for more extensive data collection without triggering anti-scraping measures.

Additionally, Geonode's scraper API could automate and simplify the scraping process, handling challenges like CAPTCHAs and AJAX requests more efficiently.

This would be especially beneficial for real-time collection, when discussions move quickly and post volume is high.

Who Should Read This Article?

If you've ever wondered how to scrape LIHKG, this thought experiment is for you.

This article aims to shed light on the complexities and challenges you would face in scraping LIHKG, without actually teaching you how to do it.

So, let's look into this fascinating subject and explore what makes it so challenging yet intriguing.

The Intricacies of LIHKG

The Structure of LIHKG

LIHKG is primarily built using a combination of HTML, CSS, and JavaScript.

While HTML and CSS handle the layout and styling, JavaScript is responsible for the dynamic aspects of the site, such as real-time updates and AJAX requests.

Why Structure Matters for Scraping

Understanding the structure is the first step in figuring out how to scrape LIHKG.

For instance, if the website heavily relies on JavaScript to load content, traditional scraping methods that only fetch the HTML might not work.

You'd need to use more advanced techniques like headless browsers to execute the JavaScript code and retrieve the data.
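A minimal sketch of that approach, assuming Playwright is installed (`pip install playwright`) and using `thread-content` as a stand-in for whatever container the real markup uses:

```python
def needs_js_rendering(html: str, marker: str = "thread-content") -> bool:
    """Heuristic: if the expected content container is absent from the
    raw HTML, the page is probably populated client-side by JavaScript."""
    return marker not in html


def fetch_rendered_html(url: str) -> str:
    """Render the page in a headless browser so client-side JavaScript
    runs before we read the DOM."""
    from playwright.sync_api import sync_playwright  # lazy import

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        html = page.content()
        browser.close()
        return html
```

The heuristic check lets you fall back to plain HTTP fetching when the content is already present, since launching a browser per page is far slower.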

Challenges in Scraping Dynamic Content

LIHKG employs AJAX (Asynchronous JavaScript and XML) to load new data without refreshing the entire page.

This makes it challenging to scrape the site using basic HTTP requests.

In a hypothetical scenario, one could use Geonode's scraper API to handle such dynamic content more efficiently.

Data Types on LIHKG

What Can You Find?

LIHKG offers a variety of content types that could be of interest to different parties. Here are some of the primary data types you might consider scraping:

  • User Posts. Original posts made by users on various topics; these serve as the starting point for discussions and can contain text, images, and links.

  • Comments. Responses to user posts; comments can have nested replies, forming a thread-like structure.

  • Upvotes and Downvotes. Each post and comment can be upvoted or downvoted, providing an indication of the community's sentiment towards the content.

  • User Profiles. These may contain information like the user's join date, activity level, and other stats, although much of this data is often anonymized or pseudonymized.
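The records above map naturally onto a small data model. Here is one possible sketch (field names are illustrative, not taken from LIHKG's actual markup):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Comment:
    """A response to a post; replies nest to form a thread."""
    author: str
    text: str
    upvotes: int = 0
    downvotes: int = 0
    replies: List["Comment"] = field(default_factory=list)


@dataclass
class Post:
    """An original post that starts a discussion."""
    title: str
    author: str
    body: str
    upvotes: int = 0
    downvotes: int = 0
    comments: List[Comment] = field(default_factory=list)

    def comment_count(self) -> int:
        """Count comments, including nested replies."""
        def walk(comments):
            return sum(1 + walk(c.replies) for c in comments)
        return walk(self.comments)
```

Normalizing scraped records into structures like these early on makes the later analysis steps (sentiment, trends) much simpler.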

Why This Data Matters

Market Research

Understanding what users are talking about can provide valuable insights into consumer behavior and trends. This is particularly useful for competitor analysis and trend visualization.

Sentiment Analysis

The upvotes and downvotes can be used to gauge public opinion on specific topics or events, including protest campaigns and social unrest events.
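The vote tallies above reduce to a simple net-sentiment score; a minimal sketch:

```python
def vote_sentiment(upvotes: int, downvotes: int) -> float:
    """Net sentiment in [-1, 1]: +1 is unanimously positive,
    -1 unanimously negative, 0 when there are no votes."""
    total = upvotes + downvotes
    if total == 0:
        return 0.0
    return (upvotes - downvotes) / total
```

A post with 8 upvotes and 2 downvotes scores 0.6; aggregating these scores per topic over time gives a rough sentiment trend line.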

Academic Research

Scholars studying online communities may find the user interactions and discussions on LIHKG to be a rich source of well-structured data for their research.

Challenges in Scraping LIHKG

Scraping a website like LIHKG is not just a technical endeavor; it's also a legal and ethical maze.

The Legal Landscape

Web scraping operates in a somewhat gray area of the law.

While scraping publicly accessible posts is generally considered legal, many websites have terms of service that prohibit scraping.

Violating these terms can lead to legal consequences, including lawsuits and fines.

How It Applies to LIHKG

LIHKG has its own set of terms and conditions that users must agree to when using the site.

These terms often include clauses that prohibit the unauthorized scraping of the website's content.

Therefore, scraping LIHKG without explicit permission could potentially lead to legal repercussions.

Ethical Implications

Beyond the legal aspects, scraping also raises ethical questions.

The platform is a community where people share their thoughts and opinions, often expecting a certain level of privacy.

Scraping this data without consent could be considered an invasion of privacy, even if the data is publicly accessible.

Technical Barriers


CAPTCHAs

One of the most common anti-scraping measures employed by websites is the CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). LIHKG uses CAPTCHAs to ensure that the user interacting with the website is human and not a bot. This presents a significant hurdle for any scraping endeavor.

AJAX Requests

As mentioned earlier, LIHKG uses AJAX (Asynchronous JavaScript and XML) to load content dynamically.

This means that the data you see on the website is not present in the initial HTML but is loaded asynchronously through JavaScript.

Traditional scraping methods that only fetch the HTML will not be able to capture this dynamically loaded data.
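One common workaround is to skip the HTML entirely and request the JSON endpoint the AJAX calls hit. The sketch below assumes a hypothetical payload shape; the `items`, `user`, and `msg` field names are placeholders, not LIHKG's real API:

```python
import json


def parse_thread_json(payload: str) -> list:
    """Extract post records from a JSON payload of the kind an AJAX
    endpoint might return (field names here are hypothetical)."""
    data = json.loads(payload)
    return [
        {"author": item.get("user"), "text": item.get("msg")}
        for item in data.get("items", [])
    ]
```

In practice you would discover the real endpoint and field names by watching the browser's network tab while the page loads.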

IP-based Restrictions

Websites often employ IP-based restrictions to block or limit access from specific geographic locations or to prevent scraping activities.

Multiple requests from the same IP address in a short period can trigger these restrictions.

Hypothetical Tools for the Job

Geonode Proxies as a Hypothetical Solution

In a hypothetical scenario where one would attempt to scrape LIHKG, Geonode proxies could serve as a solution for overcoming IP-based restrictions.

By routing your requests through multiple IP addresses, you could avoid triggering anti-scraping measures, thereby allowing for more extensive data collection.

The Role of Proxies

In web scraping, a proxy acts as an intermediary between your computer and the website you're trying to scrape.

It forwards your requests to the website and returns the website's responses back to you, effectively masking your IP address in the process.
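In Python's `requests` library, routing through a proxy is a one-line configuration; a minimal sketch (the proxy URL is a placeholder):

```python
import requests


def build_proxies(proxy_url: str) -> dict:
    """requests routes both HTTP and HTTPS traffic through the same
    forward proxy when given this mapping."""
    return {"http": proxy_url, "https": proxy_url}


def fetch_via_proxy(url: str, proxy_url: str) -> requests.Response:
    """The target site sees the proxy's IP address, not ours."""
    return requests.get(url, proxies=build_proxies(proxy_url), timeout=10)
```

Authenticated proxies typically embed credentials in the URL, e.g. `http://user:pass@proxy.example:8000`.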

Geonode Proxies for Geographical Restrictions

Geonode offers a range of residential proxies that can be used to bypass geographical restrictions.

For instance, if LIHKG were to restrict access to users from certain countries, Geonode proxies could route your requests through IP addresses located in regions that are not restricted, thereby allowing you to access the site.

Overcoming Rate Limits with Geonode Proxies

Rate limiting is another common anti-scraping measure that restricts the number of requests a single IP address can make within a given time frame.

Geonode proxies could help you overcome this by distributing your requests across multiple IP addresses, making it less likely that you'll hit rate limits.
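The distribution step can be as simple as handing out proxies round-robin; a minimal sketch (the proxy URLs are placeholders):

```python
from itertools import cycle


class ProxyRotator:
    """Hand out proxy URLs round-robin so consecutive requests leave
    from different IP addresses, spreading load below rate limits."""

    def __init__(self, proxy_urls):
        self._pool = cycle(proxy_urls)

    def next_proxy(self) -> dict:
        """Return a requests-style proxies mapping for the next proxy."""
        url = next(self._pool)
        return {"http": url, "https": url}
```

With N proxies, each individual IP makes only 1/N of the requests, so a per-IP rate limit effectively becomes N times looser.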

The Scraper API

A scraper API is a tool that simplifies the web scraping process by handling many of the challenges associated with it, such as CAPTCHAs, AJAX requests, and rate limits.

It essentially acts as a middleman that takes care of the nitty-gritty details, allowing you to focus on what you do with the data once it's been scraped.
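The usual pattern is a single endpoint that takes the target URL plus a few option flags and returns the rendered page. A sketch of assembling such a call; the parameter names here are illustrative of scraper APIs in general, not Geonode's actual interface:

```python
from typing import Optional


def build_scraper_request(target_url: str, render_js: bool = True,
                          country: Optional[str] = None) -> dict:
    """Assemble query parameters for a typical scraper-API call.
    Parameter names are hypothetical, not a documented API."""
    params = {"url": target_url, "render_js": str(render_js).lower()}
    if country:
        params["country"] = country
    return params
```

You would then send these parameters to the provider's endpoint and receive the fully rendered HTML, with CAPTCHAs and retries handled on their side.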

How Geonode's Scraper API Could Simplify the Process

Geonode's scraper API is designed to handle many of the challenges that you would face when scraping a complex website like LIHKG. It can handle retries in case of failures, making the entire process more efficient and less prone to errors.

Rethinking the Approach

Geonode's scraper API makes you rethink how to scrape LIHKG.

Instead of spending countless hours figuring out how to bypass CAPTCHAs or handle AJAX requests, you could leverage the API to handle these challenges, allowing you to focus on analyzing the data and deriving insights from it.

People Also Ask

What is LIHKG?

LIHKG is a popular forum in Hong Kong and among global Cantonese speakers, covering topics from politics to pop culture.

Often likened to Reddit, it's a key source for tracking public sentiment and trends.

Is it Legal to Scrape Websites?

The legality of web scraping varies depending on jurisdiction and the specific circumstances.

Generally, scraping publicly accessible information is considered legal. However, many websites, including LIHKG, have terms of service that prohibit unauthorized scraping.

Violating these terms can lead to legal consequences.

How Do Proxies Work in Web Scraping?

In web scraping, a proxy serves as an intermediary between your computer and the target website. It forwards your requests to the website and returns the website's responses back to you.

This process effectively masks your IP address, making it harder for websites to identify and block you.

What is a Scraper API?

A scraper API is a service that simplifies the web scraping process by handling many of the challenges associated with it.

Geonode's scraper API could hypothetically manage these aspects for you, allowing you to focus on the data and how you intend to use it.

Wrapping Up

As we come to the end of this thought experiment, it's clear that scraping LIHKG, or any website for that matter, is not a straightforward task.

The process is fraught with complexities that range from understanding the website's structure to navigating legal and ethical mazes.

Scraping LIHKG presents a unique set of challenges:

  • Technical Barriers. The website's use of AJAX for dynamic content loading and CAPTCHAs to deter bots makes scraping more complicated than simply fetching HTML content.

  • Legal and Ethical Concerns. The terms of service of LIHKG explicitly prohibit unauthorized scraping, and there are ethical considerations around user privacy and data usage.

  • Data Types. Understanding the types of data available on LIHKG, such as user posts, comments, and upvotes, adds another layer of complexity to the scraping process.

Final Thoughts

Understanding the complexities involved in scraping a website like LIHKG is crucial for anyone considering such an endeavor.

While tools exist that could hypothetically make the task easier, they cannot eliminate the legal and ethical responsibilities that come with web scraping.

Therefore, it's essential to approach the task with a comprehensive understanding of these complexities and challenges.

By exploring these aspects in detail, we hope this thought experiment has provided valuable insights into the world of web scraping.

Being aware of these challenges will better equip you to navigate the intricate landscape of web scraping responsibly and effectively.