
setnone

138 comments

12 days ago

Hi HN,

I built this out of frustration with the ever-growing list of AI models and features to try and fit into my workflow.

The visual approach clicks for me, so I went with it. It gives me more freedom and control over the outcome, and predictable results and increased productivity are what I’m after when using conversational AI.

The app is packed with features; the ones I use most are the prompt library, voice input, and text search. Narration is useful too.

The app is local-first and works right in the browser; no sign-up needed, and it's absolutely free to try.

BYOAK – bring your own API Keys.

Let me know what you think, any feedback is appreciated!

For what it's worth, one CSS line lags the HELL out of my laptop on the site. It's backdrop-filter: blur(0.1875rem) for modals, like the YouTube video popup.

lIIllIIllIIllII

11 days ago

I'm a front-end dev and I refuse to apply this effect for this reason. Even on high-end laptops it uses way too much power and starts blasting the fans.

wildrhythms

11 days ago

Does anyone know why the blur effect always takes so much power? Is there not a way to use the GPU, or is the problem something else entirely?

LorenzoBloedow

11 days ago

Some time ago I had an idea for a similar interface without the dragging feature. Basically, just a tree visualisation. I usually discuss a tangent topic in the same conversation, but I don't want to confuse the AI afterwards, so I edit a previous message from when the tangent started. However, OpenAI would discard that tangent tree; instead, it would be nice to have a tree of the tangent topics explored, without necessarily having to sort them manually, just visualising the tree.

carlosbaraza

11 days ago

ChatGPT keeps the full tree, doesn't it? You can swap back and forth on any particular node, last I checked.

IanCal

11 days ago

I haven't seen that. So I have actually built what the parent wrote.

So it seems I did waste time unnecessarily... but where exactly do I find the full tree in ChatGPT convos?

endofreach

11 days ago

I don’t think it’s available on mobile, if that’s where you are. On desktop, you can switch between previous edits.

I’d be interested in seeing what you made, though, since I’m drawn to the idea of a branching UI.

noahjk

11 days ago

It's all kept, but it's not a nice UI. When you change a question you get (on the site, maybe just desktop?) left and right buttons to move between the different variations.

One thing you could do is import your data, as the exported conversations included this full tree the last time I tried.

IanCal

11 days ago

Interesting take! It does seem to address a typical "intermediate" workflow; even though we prefer linear finished products, we often work by completing a hierarchy first. I've been using Gingko [1] for years; I find it eases the struggle of organizing the structure of a problem by allowing endless expansion of levels while easily collapsing them into a linear structure.

In your case, do you hold N contexts (N being the number of leaves in the tree)? Are the chats disconnected from each other? How do you propose to transition from an endless/unstructured canvas to some sort of a finished, organized deliverable?

1: https://gingkowriter.com/

btbuildem

12 days ago

Great questions!

> In your case, do you hold N contexts (N being the number of leaves in the tree)?

It depends; contexts are just a form of grouping.

> Are the chats disconnected from each other?

> How do you propose to transition from an endless/unstructured canvas to some sort of a finished, organized deliverable?

RAG with in-app commands. I'm working on a local RAG solution; it's early but promising. Basically, chat with all your data and apply a wide range of commands to it.

setnone

12 days ago

> How do you propose to transition from an endless/unstructured canvas to some sort of a finished, organized deliverable?

Why would they, though? For me as a potential user of this (and someone who thought about building a tool like this for myself), the tree (or better, a directed graph) is the desired end result.

TeMPOraL

11 days ago

Slightly OT, but there was standalone software just like Gingko for the Mac. Do you know anything about it?

Edit: I think it was an old version of Gingko as a desktop app. Still available at https://github.com/gingko/client/releases

Ringz

12 days ago

Are you thinking of FlowList?

https://www.flowtoolz.com/flowlist/

floam

12 days ago

Thanks, but that’s not the one. It was like a pure Markdown outliner, very keyboard driven.

Ringz

12 days ago

[deleted]

12 days ago

Are you thinking of Bike?

https://www.hogbaysoftware.com/bike/

(Maybe not — this isn’t markdown first; but it is a very macOS-y, keyboard driven, hierarchical outliner that I enjoy.)

ludwigschubert

12 days ago

Bike looks very nice, and it's built on open file formats. I will try it out. See my edit above: it might be an old version of Gingko. But I'm on my phone right now and can't figure it out…

Ringz

12 days ago

> Gingko

A subscription pricing model for software where everything should stay on my machine is a no-go for me.

djeastm

10 days ago

You can create something like this easily by yourself using Obsidian and a plugin like https://github.com/AndreBaltazar8/obsidian-canvas-conversati...

Hrun0

12 days ago

It's like when I replaced Dropbox with just a few scripts and SFTP.

varispeed

12 days ago

Syncthing, actually.

I think you were joking, but the benefit of designing software at personal scale is often an exponential reduction in complexity.

igor47

11 days ago

"easily"? well, no except you're a techie.

siva7

11 days ago

What's the hassle for normal users?

1. Open Settings -> Community Plugins

2. Search for "Canvas Conversation" and install.

Done!

niutech

11 days ago

From watching the demo it looks interesting, but I figure I would get tired of dragging nodes around and looking for ones that I'm interested in. Does it allow searching?

It would be more interesting to me if it could use AI as an agent to create a graph view, or at least propose/highlight follow-up questions that self-organize into a graph.

pants2

12 days ago

Yes, search is one of my favorite features here; try the '/' shortcut.

setnone

12 days ago

> I would get tired of dragging nodes around

Personally, I find value in taking my time to organize and drag things around, probably because I'm a visual thinker.

setnone

12 days ago

The only feedback I would give is that I'm suspicious of (and will not buy) closed-source AI anything. With that said: thank you for sloughing off the subscription-model trend! That is welcome.

But going open source so that I know "for sure" no telemetry is being sent and charging for support would be the only way to get money out of me for this. I'm probably the odd one out for this, so take that with a fair helping of salt.

This is a great idea, so much so that it's also something I could probably put together an MVP of in a weekend (or two) of dedicated work (the fancy features that I personally don't care about would probably take longer to implement, of course...).

Good work! Keep it up.

altruios

12 days ago

> But going open source so that I know "for sure" no telemetry is being sent and charging for support would be the only way to get money out of me for this.

Is the self hosted option a workable solution for you?

https://www.grafychat.com/d/docs/selfhost

Unless it's minified I guess.

IanCal

11 days ago

I would only use this (or any AI) self-hosted if it works 100% offline.

I would also not want it minified, as I would want the freedom to tinker with it to my personal specifications. Which makes me ask a question: what rights would I have to modify this software, per your license?

altruios

11 days ago

> Which makes me ask a question: what rights would I have to modify this software, per your license?

Permission to deploy and use the compiled code only; no modification, redistribution, sharing, or reselling.

setnone

10 days ago

And the 'no modification' is exactly why you lose me.

altruios

7 days ago

> I would only use this (or any ai) self-hosted if it works 100% offline.

Well, this is absolutely possible with the Ollama app running on your device and the self-host package running on localhost.

setnone

10 days ago

Thank you!

I would love it if we had some kind of 'open-build' methodology for projects that aren't willing to open the source but are willing to undergo any necessary audit against the build. Just a thought.

setnone

11 days ago

I like this and wish OpenAI or Anthropic enabled something similar in their UIs... it would be simple, actually: "create a new chat from here".

Otherwise, great job! It's cool, but it's pricey, and that is a personal deterrent.

ramoz

12 days ago

I've pegged my thinking on software purchases to local McDonald's drive-thru menu equivalencies.

gopher_space

12 days ago

McDonald's is so overpriced that I cannot condone this method :)

diebillionaires

12 days ago

Third-party clients support this. I like MindMac, for instance; it has a "Fork from this message" feature.

shreezus

11 days ago

I find editing a previous question accomplishes this well; the existing UI already keeps all your previous edits in a revision tree.

tippytippytango

11 days ago

Good landing page; it explained the product to me well enough. I like your concept too, as I've sometimes wished for something similar in the past.

siva7

11 days ago

The demo you shared shows you creating a child chat from the original parent chat. Have you tried something like connecting/merging two child chats to create a subsequent child chat? Or maybe simply creating a child chat from a previous child chat?

rajarsheem

12 days ago

I wish there were a node to load a folder of JSON, TXT, or CSV files, pipe them through one by one, and collect the outputs in another folder. Like an LLM pipeline / prompt editor.
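
To make the idea concrete, here's a rough sketch of that kind of pipeline outside any app, assuming the OpenAI Python client; the folder names, prompt, and model are placeholders, not anything the app provides:

```python
# Toy batch pipeline: run one prompt over every file in a folder and
# collect the outputs in another folder. Paths, prompt, and model name
# are placeholders for illustration only.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "Summarize the following document:"  # placeholder prompt

in_dir, out_dir = Path("inputs"), Path("outputs")
out_dir.mkdir(exist_ok=True)

for src in sorted(in_dir.glob("*")):
    if src.suffix.lower() not in {".json", ".txt", ".csv"}:
        continue
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"{PROMPT}\n\n{src.read_text()}"}],
    )
    # one output file per input file
    (out_dir / f"{src.stem}.out.txt").write_text(resp.choices[0].message.content)
```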

visarga

12 days ago

Something I built as an add-on, but which would be nice to integrate into some of these front ends, is a find/replace key:value store to help avoid potentially "leaking" something.

If you could replace IPs, domains, or subdomains with a filler domain like something.contoso.com and send that to ChatGPT instead of my internal domain, that would be a feature I would pay money for.

Like I said, I have an implementation written in Python for this, but it's an add-on to a separate frontend, which makes it extra clunky.
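
The core of it is tiny; here's a stripped-down sketch of the find/replace store (not my actual implementation, and the mapping entries are made-up examples):

```python
# Toy find/replace store: swap sensitive strings for fillers before sending
# a prompt, and swap them back in the reply. Entries are examples only.
REPLACEMENTS = {
    "10.0.12.7": "192.0.2.1",                           # internal IP -> filler IP
    "intranet.mycorp.local": "something.contoso.com",   # internal domain -> filler domain
}

def sanitize(text: str) -> str:
    """Replace every sensitive value with its filler before the text leaves the machine."""
    for real, filler in REPLACEMENTS.items():
        text = text.replace(real, filler)
    return text

def restore(text: str) -> str:
    """Map the fillers back to the real values in the model's response."""
    for real, filler in REPLACEMENTS.items():
        text = text.replace(filler, real)
    return text

prompt = sanitize("Why can't I reach intranet.mycorp.local from 10.0.12.7?")
# ...send `prompt` to the model, then run restore() on the reply...
```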

Wheaties466

11 days ago

I always feel like whiteboarding and concept mapping are a better fit for generative AI, especially given that we chat in a "multimodal way" these days -- just think of old plain-text SMS compared to the memes, links, and rich-text-powered IM tools of today.

Congrats! You may also check out flowith and ai.affine.pro for similar selling points.

Also, Heptabase is good, and they will definitely make an AI version sooner or later.

entherhe

12 days ago

Congrats on the launch. I've only played around a bit so far and will take a closer look soon.

Would be great if you could extend the documentation.

If you're not open sourcing the app, what about at least open sourcing the documentation?

One thing I'd like to extend is on https://www.grafychat.com/d/docs/intro

3. Configure Ollama server to make sure it allows connection from grafychat.

That's not very helpful. Something along the lines of: set the environment variable OLLAMA_ORIGINS to "https://www.grafychat.com" and rerun "ollama serve". Use your custom host if you're using the self-host option.

  ```sh
  OLLAMA_ORIGINS="https://www.grafychat.com" ollama serve
  ```
It's not that much more text, but it makes it way easier for people to go and try out your app with Ollama.

seedie

11 days ago

Thanks for the feedback!

Open sourcing documentation is an interesting idea.

> Something along the lines of: set the environment variable OLLAMA_ORIGINS

I'll test it out and update the docs.

setnone

10 days ago

This is wild! What have you found it most useful for?

Have you tried a more straightforward approach that follows the ChatGPT model of being able to fork a chat thread? I could use something like this where I can fork a chat thread and see my old thread(s) as a tree, but continue participating in a new thread. Your model seems more powerful, but also more complex.

ntonozzi

12 days ago

This is my daily GPT driver, so I use it for almost anything, from research to keeping my snippets tidy and well organized. I use voice input a lot to take my time forming my thoughts and requests, and text-to-speech to listen to the answers too.

setnone

12 days ago

I have to admit, I don't get it. (And I want to be clear that's a personal statement, not an overall comment on the app. It looks quite well done, and if others get value from it, awesome!)

But for me, I'm stuck with questions. What's the point of drawing connectors if there's no implied data flow? Is this just a reminder for you of the hierarchy of your queries? Or do you actually set the upstream chat as context, and reflow the whole thing if you change upstream queries? (That one would definitely be fun to play with; still not sure about long-term value, but definitely interesting.)

Good luck, and looking forward to seeing where you're taking this!

groby_b

12 days ago

Seems like organized ChatGPT in the form of mind mapping. It's quite intuitive to me because I've had some chats where I kept scrolling back to the first GPT response. With this, you can map out a question and answer, then create nodes for follow-ups about specific details. Each branch of the tree structure can organize a rabbit hole of follow-ups on a specific topic.

jonnycoder

11 days ago

Thank you!

Like I mentioned earlier, for me the app is canvas-based first, node-based second. So connections are a tool, a visual tool to craft or manage a prompt and then feed it to the LLM. The canvas is a visual tool to organize and keep large amounts of chats.

I try to use LLMs not for the sake of chatting but to get results, and these tools seem to help me with that.

Hope that makes sense.

setnone

12 days ago

Looks interesting. I had an idea last year, which I never acted on, that went down this same path.

The design looks really nice.

My chats are now so brief and infrequent compared to a year ago. The standard UI is more than I need at this point, as I never refer back to any past chat.

I also find I seem to get the best answer from the least context. Extra context seems to hurt more than it helps for my uses.

Down the line, something like this might be an obvious interface everyone will want to use. Right now, though, it's not even something I need to try.

Congrats though. I love BYOAK.

borgdefense

10 days ago

Very cool! I built a version of this [1], but balked at trying to sell it. This is the third iteration of this idea I've seen so far. Your reply popup is a smart feature and a nice touch! Love it. I love the privacy focus and BYOK, as well.

Congrats on the launch!

Really cool to see graph interfaces for AI having their moment. :)

[1] https://coloring.thinkout.app/

joshuahutt

12 days ago

Wow, this is really cool! Thanks for sharing!

diebillionaires

11 days ago

Thanks! Were you able to figure it out?

Feel free to message me if you're willing to chat about it. Would love to know if it's actually useful for you.

joshua@huttj.com

joshuahutt

10 days ago

[deleted]

11 days ago

Very nice! Thanks for sharing; I will definitely give it a try. I think we settled on the chat interface for playing with LLMs, but there's nothing really holding us back from trying new ways.

yoouareperfect

12 days ago

Yeah, I'm annoyed that OpenAI has deprecated its text completion models and API. I think there's a ton of value to be had from constrained generation like what's available with the Guidance library.

x3haloed

12 days ago

Your full-stack dev graph seems to have 75 queries in it.

Please consider providing a demo video showing how this works for code-related work.

I get the overall behavior, but sometimes code segments can be quite long, or multiple specific sections need to be combined to create additional context.

It would be helpful to see the current baseline product behavior for interaction on a "common" coding task, solving problems in TypeScript and/or Python.

bredren

12 days ago

Thank you for the feedback!

I'm planning to release more videos; stay tuned.

setnone

11 days ago

Thank you so much for building this; it's exactly what I was looking for!

Love the license instead of subscription model. Also loved that I can start trying right away without any hassle.

Couple suggestions:

I can't decide between the Extended and Premium options. What does "premium support" mean?

Also, the upgrade option only shows on the checkout page; perhaps it'd be worth including it in the FAQ and the Pricing section as well.

rmbyrro

12 days ago

Thank you!

> What does "premium support" mean?

The Premium option includes prioritized support and access to new features that might be unavailable for other license types.

I will update the website for more clarity.

setnone

12 days ago

A tree visualization like this one would be great as a complement to tabs in web browsing, especially on a monster display.

buescher

11 days ago

Amazing work, kudos! I love the canvas, drag-and-drop, and line connectors. Did you use a library or build them yourself?

wan888888

12 days ago

For a text-based version of the "tree of chats" idea using Emacs, Org mode, and gptel, see `gptel-org-branching-context` in: https://github.com/karthink/gptel?tab=readme-ov-file#extra-o...

yaantc

12 days ago

Of course, it can be done with Emacs and Org mode...

It's almost like every piece of software or library gets ported to JavaScript eventually, with the difference that Emacs and Org mode got there first.

tomfreemax

12 days ago

[deleted]

12 days ago

Didn't find it in the documentation. How would I go about self-hosting it for a small team of, say, 14 people?

Should I buy licenses for 14 (3x Extended) instances, or one for all, where everyone can see everyone's conversations, or are there accounts? I have a central Ollama instance running and also OpenAI API keys.

Thank you.

tomfreemax

12 days ago

> How would I go about self-hosting it for a small team of, say, 14 people?

> Should I buy licenses for 14 (3x Extended) instances?

Yes, that should work. Each license comes with 5 seats/activations. Each seat has its own copy of the data.

setnone

12 days ago

This looks really cool. I did not expect to see something I might actually buy, but this could be very nice for me :-)

Will the Self-host package include source (i.e. source available) or is it just the transpiler output?

Also, is there (or are there plans for) support for Postgres or another database for persistence?

freedomben

12 days ago

Thank you!

> Will the Self-host package include source (i.e. source available) or is it just the transpiler output?

No sources, just a folder with compiled assets that you can run on a static server. This is already available.

> Also, is there (or plan to be) support for postgres or other database for persistence?

Yes, there are plans for local Postgres.

setnone

12 days ago

Nice, something I didn't know I needed :D

You might want to increase the font weight in the pricing section; it's hard to read.

Also, in "How much does it cost?" I think you should add the Free option (for those like me who missed the Try For Free button at the top).

xucian

11 days ago

Thanks for the tips!

setnone

10 days ago

Do you plan to open source it? I would love to extend it. I had similar ideas about a non-linear UI.

raxrb

12 days ago

Wow. I was so frustrated with chat that I was almost going to write something like this myself. Now I don't have to :)

Curious about the business model here, though. How many sales have you had so far, if you don't mind me asking?

LASR

12 days ago

Curious why you settled on the BYOAK approach rather than a subscription approach.

iknownthing

12 days ago

Subscription fatigue is real :)

setnone

12 days ago

I have to say, I didn't realize there was no subscription until I saw this comment. That makes it much more interesting from the start.

Yes, I hate subscriptions. Love your approach.

I also love that you focus on your strength, which is the intuitive and flexible interface, rather than LLMs or prompts or whatever. This way it's also very extensible, as every good tool should be.

tomfreemax

12 days ago

I was thinking it was because it would be easier than keeping track of usage, which I assume you would need to do with a subscription-based model, i.e. all users using your key.

iknownthing

12 days ago

[deleted]

12 days ago

Congrats on the launch - I love this. Organizing text is often the hard part when working with LLMs.

The only thing I don't love is the heavy mouse use. Are there keyboard shortcuts for all the operations shown?

causal

12 days ago

Thanks!

> Are there keyboard shortcuts for all the operations shown?

For now, yes. What would you like to have added?

setnone

12 days ago

Make sure to have very tight limits on any API key you provide to someone else. They could burn through tens of thousands of dollars each day if you do not have security in place.

7734128

12 days ago

It looks like you put a lot of work into this, but node-based workflows are OK when they're a necessary evil, and just an evil otherwise.

I'd be more interested in a tool where I can "add data" by drag-and-drop or folder import, then just type whatever prompt; the app's RAG system pulls relevant data/previous prompts/etc. out of its store, ranked by relevance, and I can click on all the things I want inserted into my context, with a warning if I'm getting near the context limit. I waste a lot of time finding the relevant code/snippets to paste in manually.
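
Roughly the behavior I have in mind for the "ranked by relevance, warn near the context limit" part, as a toy sketch (naive keyword overlap stands in for real embeddings, and the token count is just a character heuristic):

```python
# Toy relevance ranking + context budgeting, as a stand-in for a real RAG store.
# Scoring and token counting are deliberately naive placeholders.
def score(query: str, snippet: str) -> float:
    # crude keyword-overlap relevance; a real system would use embeddings
    q, s = set(query.lower().split()), set(snippet.lower().split())
    return len(q & s) / (len(q) or 1)

def approx_tokens(text: str) -> int:
    return len(text) // 4  # rough rule of thumb, not a real tokenizer

def pick_context(query: str, snippets: list[str], limit: int = 8000) -> list[str]:
    # rank candidates by relevance, then fill the context up to the budget
    ranked = sorted(snippets, key=lambda s: score(query, s), reverse=True)
    chosen, used = [], 0
    for snip in ranked:
        cost = approx_tokens(snip)
        if used + cost > limit:
            print(f"warning: near the context limit ({used}/{limit} tokens), skipping a snippet")
            continue
        chosen.append(snip)
        used += cost
    return chosen
```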

CuriouslyC

12 days ago

This sounds a lot like my dream setup. We've been slowly building something along those lines. I've linked a video at the bottom that shows how we did something similar with an Obsidian plugin. Hit me up if you're interested in more details; we'd be happy to get an alpha user who gets it.

We've mostly had trouble explaining to people what exactly it is that we're building, which is fine, since we're mostly building for us, but it still seems like something like this would be the killer app for LLMs.

Obsidian Canvas UI demo -> https://www.youtube.com/watch?v=1tDIXoXRziA

Also linking our Obsidian plugin repo in case someone wants to dive deeper into what we're about -> https://github.com/cloud-atlas-ai/obsidian-client

durch

12 days ago

> I'd be more interested in a tool where I can "add data" to it by drag and drop or folder import, then I can just type whatever prompt and the app's RAG system pulls relevant data/previous

This is very similar to what I'm planning to add next, so stick around.

setnone

12 days ago

Well, here's a somewhat limited version of your idea; it really only helps mitigate the copy/paste effort for coding: https://github.com/backnotprop/prompt-tower

My original idea was a drag-and-drop interface that works at the OS level as a HUD… and functions like your idea, but that is not so simple to develop.

ramoz

12 days ago

For me this interface is canvas-based first, node-based second, meaning sometimes I might not even use connections to get my desired result from the LLM, but I have the place and form for the result and I know how to find it. Connections here are not set in stone like in mind-mapping software, for example; they're a tool.

setnone

12 days ago

Can you go get acquired by Phind please? Brainstorming with the robots is a non-linear activity and I believe you are on the right track.

midnitewarrior

11 days ago

It seems to work well, but a desktop app (or self-hosted option) is essential. I can't paste valuable API keys into a third-party website.

teruakohatu

12 days ago

The desktop app is coming soon, and the self-host option is already available as part of the Extended license.

I have no plans to open source it at the moment, but it would be great to come up with something like 'open build' for cases like that.

setnone

12 days ago

The purchase screen made me think self-hosting was coming soon for Extended. How far off is the desktop app, and will it be self-hosted or an interface to the website?

teruakohatu

12 days ago

Not far off; a few days, I would say.

Yes, it's a wrapper with opted-out Sentry and Vercel Analytics, just like the self-host package.

setnone

12 days ago

Congrats on the launch! I love that you let people try it without even signing up! The mobile experience needs to work, though.

noashavit

12 days ago

Powerful stuff; this is the kind of AI workspace I've been waiting for. Excited to see how it evolves!

_akhe

12 days ago

I wanted the same for myself but balked at the amount of work I'd need to do to implement it :)

Great job!

troupo

11 days ago

Looks cool! How can I host it?

Zambyte

12 days ago

Thanks! The self-host package comes with the Extended license.

setnone

12 days ago

Can you share details of the technology stack used to build the tool?

brunoborges

12 days ago

Awesome, this is similar to threaded conversations in Slack.

damnever

11 days ago

Super cool; this would be great for prompt engineering and iteration.

dangoodmanUT

12 days ago

Looks packed with stuff. How long did it take you to build this?

jdthedisciple

12 days ago

This is interesting and all, but it's a tad complex to use. AI is supposed to simplify your life, but this just ends up making things more complicated.

Ask -> answer, no more steps; that is the core value of ChatGPT or AI.

shanghaikid

11 days ago

Suppose I have a conversation with ChatGPT about a macro, or better yet, a series of macros. We reach the 10th sub-module, but suddenly I find a bug in module 2 (from 20 minutes earlier in the chat). While I could redirect the chat back to module 2, it's a bit convoluted. Ideally, I'd want to return to an earlier point in the conversation, resolve module 2, and then continue where we left off. However, if I update my response from 20 chats ago, I risk orphaning the rest of the conversation. The response lag also complicates things, because I might move on to new ideas or debugging tasks in the meantime. I suppose I should say that because of the lag time I'm not in sync with the chat; that lag affords me the opportunity to keep doing other things. If the chat were more like Groq, maybe that would be less the case; not sure.

The other thing I find is that if I change how I replied/asked, I get a different answer. I like the idea that I can fork a node and evaluate outcomes based on my varied inputs. You're right, it's hugely more complex. But it's complexity I think I'd love to have available.

social_quotient

11 days ago

> Ask -> answer, no more steps, that is the core value of ChatGPT or AI.

This is absolutely the ideal state of the product, I agree.

setnone

11 days ago

Cool!

You have a typo in the word ‘presicion’

Ironically

whiddershins

11 days ago

Thanks!

> Ironically

of all the places :)

setnone

10 days ago

This is great. More importantly - I love the pricing!!

nirav72

12 days ago

This seems very cool and I'd like to try it out

mubu

12 days ago

Great stuff! Interesting use cases will present themselves.

nssmeher

12 days ago

Yes! This is what I've been thinking about!

_boffin_

12 days ago

Interesting choice of questions in the demo.

Are you from Nepal?

pasaley

12 days ago

No, but I'm a frequent visitor; I love the mountains there!

setnone

12 days ago

Nice! This is really cool. Well done.

rfc

12 days ago

I wish Perplexity had a similar UI option, so I could branch my research out into multiple paths.

asadalt

12 days ago

Hard to try it on my phone.

p1esk

11 days ago

Excellent UI! I love it.

subhashp

11 days ago

I built a similar demo to this, but for images. IMO this is a much better structure for working with LLMs, as it allows you to really riff with a machine instead of feeling like you need a deterministic "next step".

https://youtu.be/k_mJgFmdWWY

kkukshtel

12 days ago

Sweet demo; you should do a Show HN! This is much more interesting to me, as the visual element makes much more sense here than just putting entire paragraphs in nodes.

dvt

12 days ago

Thanks for the encouragement! I just put up a post; I hope other people like it!

kkukshtel

11 days ago

The text nodes are also interesting; it's like a mind map. I can see how it could be great for learning, planning, collaboration, exploring...

serial_dev

11 days ago

Looks good. I tried it out, and it is indeed alpha in many regards (e.g. sometimes it does not save a picture on Windows, sometimes it does not show the prompt, ...), but the idea has potential. I would encourage you to keep working on it (and maybe keep in mind that if this suddenly goes viral and you have no API limits in place, you might get poor quickly).

lukan

11 days ago

Yeah, the idea was mostly to put a stake in the ground for an early UX experiment (I released it last year), but it's been in the back of my mind as something to continue experimenting with, and honestly rebuilding for the web in the custom game engine I'm working on.

kkukshtel

11 days ago

Looks amazing! The Unity client is quite sleek. I'd wager the creative play can be taken to the next level with a low-latency model like https://fal.ai/models/fast-turbo-diffusion-turbo

ag_hn

12 days ago

What I really want to do is make it model-agnostic. SDXL was an easy choice at the time, but you could really easily make it a local model or any hosted visual model with an endpoint. The core idea is just tying an LLM to an image model and tying those to a force-directed graph, so really anything could be an input (or an output; you could also do it with text).

kkukshtel

11 days ago

Great stuff! That deterministic "next step" is the last line of defense for us humans :)

setnone

12 days ago

[deleted]

11 days ago

[flagged]

ananya_paw

11 days ago

[flagged]

unnouinceput

12 days ago

This is the worst kind of feedback comment.

It's a damn ChatGPT front-end; do you expect it to be written in PHP?

Also, this site plays perfectly fine with uBlock Origin. If you're going to throw criticism out, at least verify that what you're saying is correct.

Honestly, reading your comment history, you should be aware that most of what you're posting on this site is at best not positive, if not outright negative.

snet0

12 days ago

Those must be Sentry

setnone

12 days ago