the rep system is shit

  • Thread starter Deleted member 2717

Deleted member 2717

Chosen undead
Sep 19, 2024
108
the rep system has literally no meaning or importance on most forums. the most common way to farm reps is to post mid-to-low quality threads that appeal to the masses, like "look at this random blackpill video. very relatable", or circlejerking and updooting people who updoot back. people will use the rep count or post-to-rep ratio as an argument to claim superiority while having the mental capacity of a sponge; reps are not a sign of a high-iq poster. i prefer anonymous imageboards since they don't have the rep nonsense. abolishing the rep system would lead to a better forum experience.
 

Attachments

  • steamuserimages-a.akamaihd.jpg (16.8 KB)
Pale God

Genius
Feb 21, 2023
6,183
the rep system has literally no meaning or importance on most forums. the most common way to farm reps is to post mid-to-low quality threads that appeal to the masses, like "look at this random blackpill video. very relatable", or circlejerking and updooting people who updoot back. people will use the rep count or post-to-rep ratio as an argument to claim superiority while having the mental capacity of a sponge; reps are not a sign of a high-iq poster. i prefer anonymous imageboards since they don't have the rep nonsense. abolishing the rep system would lead to a better forum experience.
I don't even get why you would wanna spend a lot of time on forums anyway. I just do it when I am bored and out of energy for other things. It's like my daily replacement for socializing.
 
RNT

Eternal Night
Aug 23, 2023
1,871
people will use the rep count or post-to-rep ratio as an argument to claim superiority
All high-effort posters on this forum have positive rep, besides self-proclaimed geniuses like @fries. His quirk was attacking users and expecting - what in return? If he were writing for a newspaper, he would be declared "toxic" and fired, or shuffled off to some far-away desk job for small pay.

Reps give immediate feedback: send an 🤢 and the guy is about to lose his sanity. Send it 100 times and, it goes without saying, he's finished.

This forum is small and you can bend the rep system to your liking - reward high-quality posts, ignore spammy ones. You have already figured out how to updoot; the only step left is parsing the threads. I can even share the code if you want.
 
Deleted member 2717

Chosen undead
Sep 19, 2024
108
All high-effort posters on this forum have positive rep, besides self-proclaimed geniuses like @fries. His quirk was attacking users and expecting - what in return? If he were writing for a newspaper, he would be declared "toxic" and fired, or shuffled off to some far-away desk job for small pay.

Reps give immediate feedback: send an 🤢 and the guy is about to lose his sanity. Send it 100 times and, it goes without saying, he's finished.

This forum is small and you can bend the rep system to your liking - reward high-quality posts, ignore spammy ones. You have already figured out how to updoot; the only step left is parsing the threads. I can even share the code if you want.
what code exactly?
 
RNT

Eternal Night
Aug 23, 2023
1,871
what code exactly?
Well, functions for parsing the forum and then sending updoots.

E.g. a Python function for reading the latest posts:
Python:
import requests
from bs4 import BeautifulSoup

def whats_new():
    ## Fetch the what's-new page and pull thread ids out of the start-date links
    fd = requests.get('https://neets.net/whats-new/posts/').text

    latest_ids = []
    soup = BeautifulSoup(fd, 'html.parser')
    links = soup.find_all("li", {"class": "structItem-startDate"})
    for link in links:
        sl = link.a['href']  ## e.g. /threads/some-title.1234/
        try:
            latest_id = int(sl.split('.')[1].replace('/', ''))
        except (IndexError, ValueError):
            latest_id = 1  ## fallback for links that don't carry an id
        latest_ids.append(latest_id)

    return latest_ids


latest_ids = whats_new()
for latest in latest_ids:
    print("Latest id:", latest)

You can fine-tune the exact routine.

It's not hard to write yourself; the one possible bottleneck is performing a "log-in".
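
One workaround: don't log in programmatically at all, just lift the session cookies from your own browser with pycookiecheat. A rough, untested sketch to check the cookies actually carry a session (assumes the stock XenForo /account/ route, which bounces guests to the login page):
Python:
import requests
from pycookiecheat import BrowserType, chrome_cookies

def logged_in(cookies):
    ## /account/ answers 200 for a valid session; guests get a redirect to /login/
    r = requests.get('https://neets.net/account/', cookies=cookies, allow_redirects=False)
    return r.status_code == 200

cookies = chrome_cookies('https://neets.net', browser=BrowserType.CHROMIUM)
print("Session valid:", logged_in(cookies))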
 
Deleted member 2717

Chosen undead
Sep 19, 2024
108
Well, functions for parsing the forum and then sending updoots. ...
sure, i don't see why not
 
RNT

Eternal Night
Aug 23, 2023
1,871
sure, i don't see why not
Creating a database:
Python:
import sqlite3
import os.path

def db_start():
    ## Create the database once; later runs just reuse the file
    if not os.path.exists('sentiment.db'):
        con = sqlite3.connect("sentiment.db")
        cur = con.cursor()
        cur.execute("CREATE TABLE neets(thread_id,username,url,title,post_id,post_text,reacts,react_type,react_givers,quote_ids,quote_authors,quote_texts,date,epoch,is_op)")
        con.commit()
        cur.close()
        con.close()
        print("DB created")
    else:
        print("DB already exists, skip creating it")

Parsing the thread and inserting data into the database:
Python:
import json
import requests
import sqlite3
from bs4 import BeautifulSoup
from time import sleep

def parse(thread_id):
    url = "https://neets.net/threads/" + str(thread_id)
    fd = requests.get(url).text
    fd = fd.replace('\n', ' ').replace('\r', '')

    soup = BeautifulSoup(fd,'html.parser')

    title = soup.title.text.split('|')[0][:-1]
    print('Title:',title)

    num = len(soup.find_all("div", {"class": "bbWrapper"}))

    d = {}  ## field insertion order must match the column order in CREATE TABLE

    con = sqlite3.connect("sentiment.db")
    cur = con.cursor()

    for g in range(num):
        post_data = soup.find_all("article", {"class": "message message--post js-post js-inlineModContainer"})[g]
        username = str(post_data['data-author'])
        post_id = int(post_data['data-content'].split('-')[1])
        signature = post_data.find("h5", {"class": "userTitle message-userTitle"}).text

        avi_block = post_data.find("a", {"data-user-id": True})  ## avatar link carries the poster's numeric id
        user_id = avi_block['data-user-id'] if avi_block else None

        try:
            join_date = post_data.find("dd").text
        except AttributeError:  ## no join-date <dd> block for this poster
            join_date = "0"

        time_block = post_data.find("time", {"class": "u-dt"})
        date = time_block['datetime']
        epoch = time_block['data-time']

        text_block = post_data.find("div", {"class": "message-content js-messageContent"})

        quote_ids = []
        quote_authors = []
        quote_texts = []

        quote = text_block.find_all("div", {"class": "bbCodeBlock-expandContent js-expandContent"})
        if not quote:
            pass ## Post without quotes (find_all returns an empty list, never None)
        else:
            counter = 0
            for q in quote: ## Iterating over quotes
                quote_text = q.text.strip()  ## strip() avoids the IndexError manual trimming hits on empty quotes
                quote_source = text_block.find_all("blockquote", {"class": "bbCodeBlock bbCodeBlock--expandable bbCodeBlock--quote js-expandWatch"})[counter]
                counter = counter + 1
                try:
                    quote_id = int(quote_source['data-source'].split(' ')[1])
                    quote_author = quote_source['data-quote']
                except (KeyError, IndexError, ValueError):  ## malformed quote header
                    quote_id = 0
                    quote_author = "0"

                quote_ids.append(quote_id)
                quote_authors.append(quote_author)
                quote_texts.append(quote_text)

        post_text = post_data.find("div", {"class": "bbWrapper"})
        while post_text.find('blockquote'): ## Will strip all quotes
            post_text.blockquote.decompose()

        yt = post_text.find("span", {"data-s9e-mediaembed-c2l": "youtube"})
        if yt:
            yt_link = "https://youtube.com/watch?v=" + yt['data-s9e-mediaembed-iframe'].split(',')[7].split('/')[4].split("?")[0]

        post_text = post_text.text
        if post_text[0:4] == "    ":
            post_text = post_text[4:]

        if yt:
            if len(post_text) == 0:
                post_text = yt_link
            else:
                post_text = post_text + "\n" + yt_link

        footer = post_data.find("footer", {"class": "message-footer"})
        reacts_block = footer.find("ul", {"class": "reactionSummary"})
        react_type = []
        react_givers = []
        if reacts_block:
            for i in reacts_block.find_all("img"):
                react_type.append(i['alt'])
      
            for b in footer.find_all("bdi"):
                react_givers.append(b.text)

            if len(react_givers) == 3:  ## XenForo shows at most 3 names; the rest collapse into "and N others"
                rs =  footer.find("a", {"class": "reactionsBar-link"}).text
                if " other" in rs:
                    st = rs.find(" and ")
                    unknown_reacts = int(rs[st+5:].split(' ')[0])
                    for a in range(unknown_reacts):
                        react_givers.append("un__u")

        is_op = (g == 0)  ## first message block on the page is the OP

        d['thread_id'] = thread_id
        d['username'] = username
        d['url'] = url
        d['title'] = title
        d['post_id'] = post_id
        d['post_text'] = post_text
        d['reacts'] = len(react_givers)
        d['react_type'] = json.dumps(react_type)
        d['react_givers'] = json.dumps(react_givers)
        d['quote_ids'] = json.dumps(quote_ids)
        d['quote_authors'] = json.dumps(quote_authors)
        d['quote_texts'] = json.dumps(quote_texts)
        d['date'] = date
        d['epoch'] = epoch
        d['is_op'] = is_op

        cur.execute("INSERT INTO neets VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",tuple(d.values()))
        con.commit()

        sleep(0.5)
    cur.close()
    con.close()
    try:
        print('Date:',date,epoch,"\n")
    except NameError:  ## thread had no posts, date was never set
        pass
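
Gluing whats_new() and parse() together is then just a loop, something like:
Python:
from time import sleep

## Crawl whatever the what's-new page currently lists; whats_new() and parse() as defined above
for thread_id in whats_new():
    parse(thread_id)
    sleep(2)  ## be polite between threads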


Sending an updoot:
Python:
import requests
from bs4 import BeautifulSoup
from pycookiecheat import BrowserType, chrome_cookies

def submit(post_id, react_id):
    s = requests.Session()
    s.max_redirects = 3

    ## Look up documentation for the pycookiecheat library and change the browser type to yours, I'm using Chromium ##
    cookies = chrome_cookies('https://neets.net', browser=BrowserType.CHROMIUM)

    ## Fetch the post's page (it redirects to the thread) to scrape a fresh _xfToken, XenForo's CSRF token
    url = 'https://neets.net/posts/' + str(post_id) + '/'
    fd = s.get(url, cookies=cookies).text
    soup = BeautifulSoup(fd, 'html.parser')
    button_cookie = soup.find("input", {"type": "hidden"})['value']

    payload = {
        '_xfToken': button_cookie,
        'reaction_id': react_id
    }

    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
        'Accept-Encoding': 'gzip, deflate, br, zstd',
        'Accept-Language': 'en-US,en;q=0.9',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Origin': 'https://neets.net',
        'Referer': url,
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0'
    }

    react_link = 'https://neets.net/posts/' + str(post_id) + '/react'
    return s.post(react_link, headers=headers, data=payload, cookies=cookies, timeout=3, allow_redirects=False)

React ids for submit() function are defined as follows:
1: +1
2: <3
3: JFL
4: Woah
5: Damn...
6: Seriously?
7: Ugh
8: ...?
9: Based
10: "100"
11: Nerd

The rest is pure mechanics - who to updoot, based on what criteria etc.
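
E.g. a toy criterion using the database from above - long posts nobody has reacted to yet (pick_targets is a placeholder name, tune the SQL to taste):
Python:
import sqlite3

def pick_targets(min_len=500):
    ## Long posts with zero reacts so far - adjust the criterion as needed
    con = sqlite3.connect("sentiment.db")
    cur = con.cursor()
    cur.execute("SELECT post_id FROM neets WHERE reacts = 0 AND length(post_text) >= ?", (min_len,))
    ids = [row[0] for row in cur.fetchall()]
    cur.close()
    con.close()
    return ids

for post_id in pick_targets():
    submit(post_id, 9)  ## 9 = Based, per the table above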
 
Deleted member 2717

Chosen undead
Sep 19, 2024
108
Creating a database: ...

big thanks big bro
 