Reddit Scraper

Extract 40+ data points from any Reddit subreddit, post, user or search — posts, comments, authors & subreddit metadata all in one run.

Download sample
2 users
5 runs

Trusted worldwide by the best

Export results from Reddit — right now

Extract

Paste any mix of Reddit subreddit, post, user profile or search URLs and extract 40+ data points per result — covering posts, comments, authors and subreddit metadata.


Schedule

Automate Reddit data collection on daily, weekly, or monthly schedules.


Export

Export all Reddit data to Google Sheets, email, or S3 — fully automated.


33 data attributes per result

Every scrape returns structured data you can export as CSV, JSON, or via API.

url
Canonical URL of the Reddit post

Post Id
Reddit post ID (base36 identifier)
Example: 1stfrij

title
Title of the Reddit post
Example: Pourquoi y a-t-il si peu de femmes en informatique ?

body
Text body of the post (for text posts)
Example: Je bosse dans une ESN, ~1000 personnes. On est facilement à 90% d'hommes dans les équipes tech.

Post Type
Type of post: text, image, link, video, etc.
Example: text

score
Upvote score of the post
Example: 19

Comment Count
Number of comments on the post
Example: 126

Award Count
Number of awards received by the post
Example: 0

author
Username of the post author
Example: Independent_Lynx715

Author Id
Reddit ID of the post author
Example: t2_19xe8ly1c0

subreddit
Name of the subreddit
Example: developpeurs

Subreddit Prefixed Name
Subreddit name with r/ prefix
Example: r/developpeurs

Subreddit Id
Reddit ID of the subreddit
Example: t5_3oxtd

flair
Post flair label (if any)
Example: Carrière

domain
Domain of the post content (self.xxx for text posts, or external domain for link posts)
Example: self.developpeurs

language
Detected language of the post
Example: fr

Created At
UTC timestamp when the post was created
Example: 2026-04-23T11:16:27

type
Row type: 'post' for the post itself, 'comment' for a comment, 'user' for a user profile, 'subreddit' for subreddit metadata
Example: post

Comment Id
Reddit comment ID (base36). Null for post rows.
Example: ohsv4v5

Parent Id
Parent comment ID for nested replies. Null for top-level comments and post rows.
Example: ohsv4v5

depth
Comment nesting depth (0 = top-level). Null for post rows.
Example: 0

karma
Total karma of the user. Populated for type='user' rows only.
Example: 87

Post Karma
Post karma of the user. Populated for type='user' rows only.
Example: 17

Comment Karma
Comment karma of the user. Populated for type='user' rows only.
Example: 70

contributions
Number of contributions by the user. Populated for type='user' rows only.
Example: 7

trophies
JSON array of trophy names earned by the user. Populated for type='user' rows only.
Example: ["Five-Year Club"]

Active Subreddits
JSON array of subreddits the user is active in (prefixed, e.g. 'r/Bitcoin'). Populated for type='user' rows only.
Example: ["r/Bitcoin", "r/PostCardExchange"]

Moderated Subreddits
JSON array of subreddits the user moderates (prefixed, e.g. 'r/belikeme'). Populated for type='user' rows only.
Example: ["r/belikeme"]

Subreddit Description
Description of the subreddit. Populated for type='subreddit' rows only.
Example: A community dedicated to sharing...

Weekly Active Users
Number of weekly active users in the subreddit. Populated for type='subreddit' rows only.
Example: 1208799

Weekly Contributions
Number of weekly contributions (posts + comments) in the subreddit. Populated for type='subreddit' rows only.
Example: 18683

Subreddit Rules
JSON array of subreddit rules, each with 'number', 'title', and 'description'. Populated for type='subreddit' rows only.
Example: [{"number": "1", "title": "Be civil", "description": "..."}]

Subreddit Resources
JSON array of community resource links, each with 'text' and 'url'. Populated for type='subreddit' rows only.
Example: [{"text": "Wiki", "url": "https://www.reddit.com/r/example/wiki/…"}]

Fast

280 results per minute
Lightning-fast performance


Solid

99.95% task success rate
Dependable, reliable data every time


Cost-competitive

$1 per 1000 results
Top affordability worldwide


Trusted by the best

Used obsessively by the data-hungry around the world


Frequently Asked Questions

Is it legal to scrape Reddit?

Generally, yes. Posts, comments, subreddit pages and user profiles you can see without logging in are public information, and scraping public data is legal in most jurisdictions. For the full breakdown, read our legal guide.

Do I need a Reddit account to use the scraper?

No login required. The Reddit Scraper works on public data only — no account sync, no cookies, no Chrome extension. Just paste URLs and run.

What data does the Reddit Scraper extract?

40+ data points per result. You get post titles, body text, scores, comment counts, authors, comment threads with depth and parent IDs, user karma and trophies, subreddit descriptions, rules, weekly active users and more — covering posts, comments, users and subreddits in a single run.

How fast is the Reddit Scraper?

Up to 280 results per minute. Run multiple subreddits, posts and searches in parallel to scale up further.

What inputs does the scraper accept?

Any mix of Reddit URLs. Drop in subreddit URLs, post URLs, user profile URLs or Reddit search URLs — the scraper auto-detects the type and extracts accordingly. No need to split runs by input type.
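One plausible way such auto-detection could work is pattern matching on the URL path. This is a hypothetical sketch, not the scraper's actual logic; the patterns cover the four input kinds named above.

```python
import re

# Illustrative URL patterns for each input kind. Order matters:
# post and search URLs also live under /r/<name>/, so the generic
# subreddit pattern is checked last.
URL_PATTERNS = [
    ("post", re.compile(r"reddit\.com/r/[^/]+/comments/")),
    ("user", re.compile(r"reddit\.com/(?:user|u)/[^/]+")),
    ("search", re.compile(r"reddit\.com/search|reddit\.com/r/[^/]+/search")),
    ("subreddit", re.compile(r"reddit\.com/r/[^/]+/?")),
]

def detect_input_type(url: str) -> str:
    """Return 'post', 'user', 'search', 'subreddit', or 'unknown'."""
    for kind, pattern in URL_PATTERNS:
        if pattern.search(url):
            return kind
    return "unknown"

print(detect_input_type("https://www.reddit.com/r/developpeurs/comments/1stfrij/"))  # post
print(detect_input_type("https://www.reddit.com/user/Independent_Lynx715"))  # user
```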

Does it scrape comments and replies?

Yes — full comment threads, replies included. Each comment comes with its body, author, score, depth, parent ID and post ID, so you can rebuild the complete conversation tree.
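Rebuilding that conversation tree from the flat rows is a one-pass job. A minimal sketch, assuming rows keyed by the `comment_id` and `parent_id` fields from the table above (the sample rows are invented):

```python
# Flat comment rows: top-level comments have a null parent_id.
rows = [
    {"comment_id": "c1", "parent_id": None, "body": "top-level"},
    {"comment_id": "c2", "parent_id": "c1", "body": "reply"},
    {"comment_id": "c3", "parent_id": "c2", "body": "nested reply"},
    {"comment_id": "c4", "parent_id": None, "body": "another thread"},
]

def build_tree(rows):
    """Return top-level comments, each with a recursive 'replies' list."""
    nodes = {r["comment_id"]: {**r, "replies": []} for r in rows}
    roots = []
    for node in nodes.values():
        parent = nodes.get(node["parent_id"])
        if parent is not None:
            parent["replies"].append(node)
        else:
            roots.append(node)
    return roots

tree = build_tree(rows)
print(len(tree))  # 2 top-level threads
print(tree[0]["replies"][0]["replies"][0]["body"])  # nested reply
```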

Can I scrape Reddit user profiles?

Yes. Paste a user URL (e.g. reddit.com/user/username) and you'll get post karma, comment karma, trophies, active subreddits, moderated subreddits and account creation date.

How many posts or comments can I scrape per run?

No hard limit. Scrape thousands of posts and comments in a single run — pagination is handled automatically. Scale by running multiple URLs in parallel or scheduling recurring runs.
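Automatic pagination typically means following a cursor until the source runs out. Reddit's listing API works this way, returning an `after` token for the next page. The sketch below stubs the HTTP call with a dictionary of fake pages, so it only illustrates the loop, not a real scrape.

```python
# Fake pages keyed by cursor; a real implementation would pass the
# 'after' token to an HTTP request instead.
PAGES = {
    None: {"children": ["p1", "p2"], "after": "t3_p2"},
    "t3_p2": {"children": ["p3", "p4"], "after": "t3_p4"},
    "t3_p4": {"children": ["p5"], "after": None},
}

def fetch_page(after=None):
    """Stand-in for an HTTP call returning one page of a Reddit listing."""
    return PAGES[after]

def scrape_all():
    """Follow the 'after' cursor until it is exhausted; no fixed page limit."""
    results, after = [], None
    while True:
        page = fetch_page(after)
        results.extend(page["children"])
        after = page["after"]
        if after is None:
            break
    return results

print(scrape_all())  # ['p1', 'p2', 'p3', 'p4', 'p5']
```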

Can I schedule recurring Reddit scrapes?

Yes — schedule runs hourly, daily or weekly. Perfect for monitoring subreddits, tracking new posts on keywords or building a continuously updated dataset.

What export formats are supported?

CSV, JSON and direct push to Google Sheets — plus the lobstr.io API for piping data into your own stack. Connect Google Sheets once and every run lands in your spreadsheet automatically.

Ready to get started?

Export your first results for free.

Contact sales