You are viewing a single comment's thread from:

RE: Programming Diary #34: Your post now has a permanent payout window

in Steem Dev • 4 months ago

As somebody who curates manually, and also likes your idea, what would the best way of supporting this be? Presumably in my case, I'm best to keep my STEEM Power, rather than delegate it to you? Although without a decent SP base, can Thoth have a big enough impact?

The original thought was for active posts. In that case, I imagined that you'd be able to scan the output for posts that look interesting and then click through and support the featured authors directly. Now that historical posts are in the mix, it gets more complicated. Of course, you could vote on the Thoth post - but I get that there may be disqualifying content in some. You could also use an eversteem-style comment/beneficiary mechanism to individually support any interesting paid-out posts that it highlights (maybe I could even consider adding one reply to the Thoth post with an individual beneficiary for each included author? Not sure if I like that idea, though...).
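For illustration, here's roughly what that reply/beneficiary mechanism could look like with the beem library - this is only a sketch of the idea (not eversteem's actual code), and the account names, permlinks, and key are all placeholders:

```python
# Sketch: reply under a Thoth post, directing this reply's author
# rewards to the featured author via a beneficiary entry.
from beem import Steem
from beem.comment import Comment

stm = Steem(keys=["<posting-wif>"])  # placeholder posting key

# The paid-out post being highlighted (placeholder authorperm).
featured = Comment("@some-author/some-permlink", blockchain_instance=stm)

stm.post(
    title="",  # replies don't need a title
    body=f"Featuring @{featured['author']}'s post: {featured['title']}",
    author="curation-account",                        # placeholder account
    reply_identifier="@curation-account/thoth-post",  # placeholder parent post
    # Weight is in basis points: 10000 == 100% of author rewards.
    beneficiaries=[{"account": featured["author"], "weight": 10000}],
)
```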

For historical posts, if I find a way to deal with the cap on beneficiaries, then hopefully Thoth can attract enough delegation for passive income to make a difference. Besides ROI and user retention, I suppose there may also be SEO benefits from resurfacing and relinking lost content(?), so passive investors have multiple self-interested reasons to delegate.

Given that there are already people highlighting "active" content, should Thoth focus on content that isn't active and has already paid out?

This is a good point. I think that's probably the case. At least near the beginning. It's funny how the project seems to be getting flipped upside down.

In my opinion, the posts selected don't highlight the best of Steem in the past or at present, so I think the selection algorithm will need some adjustment. It's a proof of concept, though, and in my opinion it's looking good.

I completely agree. In addition to improving the post selection, I also need to stop it from picking the same post multiple times when someone edits an existing comment. The current version is definitely proof of concept.

I also wonder if Thoth could self-learn by analysing the success of each post it produces?

Definitely possible in the future. @cmp2020 had a machine-learning voting bot running a couple years ago when he was in school. Eventually, that sort of selection could be inserted between the screening and the LLM - or before the screening. That's a long way down the road, though, I think.
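To make the placement concrete, here's a hypothetical sketch of that pipeline shape - the function names are mine for illustration, not Thoth's:

```python
from typing import Dict, List

Post = Dict[str, str]  # simplified stand-in for a Steem post

def passes_screens(post: Post) -> bool:
    """Rule-based screening (tags, length, duplicate checks, ...)."""
    return len(post.get("body", "")) > 500  # illustrative rule only

def ml_rank(posts: List[Post]) -> List[Post]:
    """Future ML stage: score/reorder by predicted post success."""
    return posts  # a trained model would sort by score here

def llm_evaluate(post: Post) -> str:
    """LLM summary/evaluation of a surviving candidate."""
    return f"AI summary of {post.get('permlink', '?')}"  # stub

def curate(posts: List[Post]) -> List[str]:
    screened = [p for p in posts if passes_screens(p)]
    ranked = ml_rank(screened)  # the learned stage would slot in here
    return [llm_evaluate(p) for p in ranked]
```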

I'd also think about attention span. I'm not lazy but I'm "Time Poor". So it's too text heavy for my liking.

I agree on this point, too. I should have mentioned that in the next steps section. I'm going to have a table near the front with just the titles, authors, dates, tags, and maybe 100 words from the body; then I'll put the AI output in a separate section - maybe another table. That way the reader can scan the summary table first and decide which of the AI summaries they want to read.
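Roughly this kind of thing (the field names are placeholders, just to illustrate the layout):

```python
# Build a compact markdown summary table; the AI output would go in a
# separate section further down the post.
posts = [
    {"title": "Example post", "author": "alice", "created": "2016-08-01",
     "tags": ["programming", "steem"], "body": "Lorem ipsum dolor sit amet ..."},
]

def excerpt(body: str, words: int = 100) -> str:
    """First ~100 words of the post body."""
    return " ".join(body.split()[:words])

header = "| Title | Author | Date | Tags | Excerpt |\n| --- | --- | --- | --- | --- |"
rows = [
    f"| {p['title']} | @{p['author']} | {p['created']} | "
    f"{', '.join(p['tags'])} | {excerpt(p['body'])} |"
    for p in posts
]
print("\n".join([header, *rows]))
```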

You could even introduce some bias now based upon your own opinions of the /tags page.

Including/excluding tags is one of the screens that I'm still planning to add. I think bias is good. I'd like to see multiple instances of Thoth all running with their own unique biases, and then the community can decide which ones are most valuable.
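The screen itself could be as simple as this sketch (the tag sets are placeholder examples, not a real configuration):

```python
# Tag include/exclude screen; each Thoth instance would configure
# its own lists to express its own bias.
INCLUDE_TAGS: set = set()        # empty set = no include restriction
EXCLUDE_TAGS = {"nsfw", "test"}  # example exclusions only

def passes_tag_screen(post_tags: set) -> bool:
    """Reject on any excluded tag; if an include list exists, require a hit."""
    if EXCLUDE_TAGS & post_tags:
        return False
    return not INCLUDE_TAGS or bool(INCLUDE_TAGS & post_tags)

# passes_tag_screen({"programming", "dev"}) -> True
```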

Final thought - It's a shame that @thoth went to some pointless account 9 years ago.

Yep. That was one of the first things I checked, too. Disappointing. I have an alternate in mind that is available, but @thoth would've definitely been ideal. Thank you for the feedback!


(maybe I could even consider adding one reply to the Thoth post with an individual beneficiary for each included author? Not sure if I like that idea, though...).

No - it feels a bit "spammy" and perhaps overkill. I think that once you get the algorithm right, then it probably wouldn't be necessary anyway.

I suppose there may also be SEO benefits from resurfacing and relinking lost content(?)

It's certainly possible - especially if the post passes the "follow" links threshold (which I'll look into at some point because it feels unhelpfully high at the moment).

I think the benefit is two-fold - it might help to index pages that might have previously been missed, and it might also give that content greater weight because of additional internal links to it.

Either way, it shouldn't do any harm.

It's funny how the project seems to be getting flipped upside down.

That's good - as the thoughts mature, the project will move in the right direction.

Definitely possible in the future. @cmp2020 had a machine-learning voting bot running a couple years ago when he was in school

*(Excellent! GIF)*

I can feel the potential with this project. Whilst I think that greed with the existing generation of voting bots will continue, this feels like a viable alternative that should benefit the "real" Steemit community longer term.

 last month 

Holy moly, this was two and a half months ago.😮

No - it feels a bit "spammy" and perhaps overkill. I think that once you get the algorithm right, then it probably wouldn't be necessary anyway.

Check my reasoning here. I'm rethinking this.

  1. If I want to maximize returns for delegators, I need to post or comment 10 times per day (remember that the fundamental goal is to create a mechanism that can reduce overall spam levels by outcompeting the content-indifferent voting services).
  2. If I post 10 times a day, that's covering up to 50 authors per day, and it's showing up in feeds 10 times a day. Finding 50 higher-quality posts per day can be challenging.
  3. If I post 2 times per day and create 1 reply per (included) post (i.e. up to 10 replies per day total), that's covering up to 10 authors per day, and it's only showing up in feeds twice. Additionally, it lets follow-up voters decide whether to direct rewards to all of the original posts or just to target the best.
  4. If I post 2 times per day with 5 replies each, one bad apple can't spoil the bunch, because follow-up voters can still reward individual selections, even if one of the included posts was a bad pick.
  5. Follow-up voters are more likely to engage with 10 included posts than 50.
  6. With 10 posts per day, I can direct beneficiary rewards to a max of 20 delegators per day (assuming 5 included authors per post).
  7. With 2 posts and 10 replies per day, I can direct beneficiary rewards to a max of 64 delegators per day (2 per top-level post and 6 per reply - see the sketch below this list).
  8. Eventually, if a machine-learning capability is added, it will be useful to have individualized performance information from each included post.
  9. At current limits, free LLMs can support evaluations for 2 posts/10 authors per day. 10 posts/50 authors would almost certainly require a paid LLM.

Hence, 2 posts per day with 5 replies each would be the least spammy and probably the preferred solution.
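For anyone checking my numbers, here's a small sketch of the arithmetic. It assumes Steem's cap of 8 beneficiaries per post/comment, and it infers from items 6 and 7 that one additional slot per item is held back (so 8 - authors - 1 go to delegators) - that inference is mine, not a settled design detail:

```python
# Delegator-slot arithmetic for the scenarios in the list above.
MAX_BENEFICIARIES = 8   # Steem's per-post/per-comment beneficiary cap
RESERVED_SLOTS = 1      # inferred from the counts in items 6 and 7

def delegator_slots(posts: int, authors_per_post: int,
                    replies: int = 0, authors_per_reply: int = 1) -> int:
    per_post = MAX_BENEFICIARIES - authors_per_post - RESERVED_SLOTS
    per_reply = MAX_BENEFICIARIES - authors_per_reply - RESERVED_SLOTS
    return posts * per_post + replies * per_reply

print(delegator_slots(posts=10, authors_per_post=5))             # 20 (item 6)
print(delegator_slots(posts=2, authors_per_post=5, replies=10))  # 64 (item 7)
```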

 4 months ago 

No - it feels a bit "spammy" and perhaps overkill.

Exactly what I was thinking (though at current limits it would buy me 30 or 35 more slots for delegator beneficiaries 😉).

I think that once you get the algorithm right, then it probably wouldn't be necessary anyway.

That's what I'm hoping too.

I can feel the potential with this project. Whilst I think that greed with the existing generation of voting bots will continue, this feels like a viable alternative that should benefit the "real" Steemit community longer term.

That's how I feel, too. It's obviously not a panacea, but there's a huge amount of buried value out there that can strengthen the ecosystem if we figure out good ways to "mine" it.