#gemini


People have low expectations for AI. I asked ChatGPT (paid account, not the free stuff) what would make ChatGPT better. I think the answer is a lot higher quality than what people keep saying one should expect. This is what I got:

--- My Prompt:
With DeepSeek, there are things it is very good at, but it fails quickly and rather spectacularly. Your models tend to break down as well, but somewhere there must be a model that will be able to stay on topic, even when the topic changes, so that conversations move around like human conversations move around subject-to-subject. What would take ChatGPT over the top?

--- ChatGPT reply:
Here’s what it’ll take to get closer to that human-like conversation flow:

  1. Long-Term Memory That Actually Works. Most models either have no memory or they remember only in a clunky, artificial way.

A better memory system would not just recall facts, but contextual relevance—like “we talked about LLM efficiency two days ago and you hate cheerleader language.”

It has to be smart recall, not just data storage.

  2. Thread Awareness
    Imagine a model that can say: “You mentioned DeepSeek earlier. Should I compare it again now that we’ve brought up open-source tools?”
    That requires models to track not just words, but conversational branches—like a tree, not a straight line.

  3. Graceful Fallbacks
    When a model doesn’t know, it should gracefully pivot, not hallucinate or go off-topic. This kind of conversational “stability” is rare and subtle.

Most models that fail “spectacularly,” as you said, are trying to bluff through complexity instead of narrowing scope or clarifying.

  4. Hybrid Contextual Reasoning. The future model needs to reason across subjects rather than switching on and off.

If we're talking about language models, and you shift to science fiction, it should be able to bring up Asimov or neural networks in storytelling—bridging topics, not just hopping between them.

Right now, most models either:

Stay sharp but shallow (good at Q&A, bad at memory), or

Drift into friendly nonsense (chatty, but brittle).

The sweet spot will be a system that grows with you across time and topics—and doesn’t forget what you value in a conversation.

....

When you see stuff posted online meant to downplay the quality of output that's possible, keep in mind that this was a single question with no leading prompt. I think this is the kind of answer we want from our AI devices.

Thoughts? Feel free to tear this response apart. It needs that kind of human analysis.
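For what it's worth, the "smart recall" idea in point 1 is the easiest piece to make concrete: instead of stuffing the entire history into the prompt, score past exchanges against the current message and surface only the most relevant ones. Below is a toy, standard-library-only Python sketch; the ConversationMemory class and the bag-of-words scoring are illustrative assumptions, not how ChatGPT or any real memory system works.

import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    # Very rough bag-of-words vector; a real system would use embeddings.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class ConversationMemory:
    # Stores past exchanges and recalls the ones relevant to the current message.
    def __init__(self):
        self.entries = []  # list of (text, vector) pairs

    def remember(self, text: str) -> None:
        self.entries.append((text, vectorize(text)))

    def recall(self, message: str, top_k: int = 2) -> list[str]:
        query = vectorize(message)
        ranked = sorted(self.entries, key=lambda entry: cosine(query, entry[1]), reverse=True)
        return [text for text, _ in ranked[:top_k]]

memory = ConversationMemory()
memory.remember("We compared DeepSeek and ChatGPT on staying on topic.")
memory.remember("You said you hate cheerleader language in replies.")
memory.remember("We planned a trip to Osaka in April.")
print(memory.recall("Which model drifted off topic again?"))  # only the relevant memories come back

A production system would swap the word counts for embeddings, but the shape of the idea (recall by relevance, not by recency) is the same.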

#ai #openai #deepseek

3 years ago, as I had just created Offpunk, I started to implement what would become ansicat: a way to render HTML (or any format) in your terminal.

Besides a lot of technical learning, the biggest lesson was probably how bad people are at writing text and how much they abuse every format.

ploum.net/2022-03-24-ansi_html

See the results here:

offpunk.net/install.html

ploum.net: Rendering the Web with Pictures in Your Terminal: more than you would like to know about HTML, ANSI and the philosophy of communication.
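The core trick the article describes, mapping HTML structure onto ANSI escape codes, can be sketched in a handful of lines. This is only a toy built on Python's standard html.parser, not Offpunk's actual ansicat code:

from html.parser import HTMLParser

# ANSI escape sequences for a few text styles.
BOLD, ITALIC, RESET = "\033[1m", "\033[3m", "\033[0m"

class AnsiRenderer(HTMLParser):
    # Turns a small subset of HTML into ANSI-styled terminal text.
    STYLES = {"b": BOLD, "strong": BOLD, "h1": BOLD, "i": ITALIC, "em": ITALIC}

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in self.STYLES:
            self.out.append(self.STYLES[tag])
        if tag in ("p", "br", "h1"):
            self.out.append("\n")

    def handle_endtag(self, tag):
        if tag in self.STYLES:
            self.out.append(RESET)

    def handle_data(self, data):
        self.out.append(data)

    def render(self, html: str) -> str:
        self.feed(html)
        return "".join(self.out)

print(AnsiRenderer().render("<h1>Hello</h1><p>Plain and <em>emphasised</em> text.</p>"))

The escape codes are the easy part; as the article argues, the real work is coping with how carelessly real-world HTML is written.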

@ploum names what I call engagement asceticism "engagement rehabilitation", and according to him it's currently a common reason for escaping into #gemini. So, there you go, the self does what the selves do.

A good read in Gemini: gemini://ploum.net/2025-02-27-

And on the regular web: ploum.net/2025-02-27-engagemen

ploum.net: The Engagement Rehab

Random thought that I'm sure multitudes have had.

Now that #google is throwing unwanted #gemini shit onto my P7 and I can't set reminders any more by voice, what are the costs of subbing in a fully local #homeassistant Voice Assist as my "turn device X on / Remind me to do Y tomorrow at 8pm" alternative?

My phone is already permanently connected to home base with Wireguard. I'm thinking I'd need to run Ollama/Whisper/etc on some piece of hardware. Doing it in Docker on either a P4b or my N5105 NAS is too slow. What's the cheapest bit of hardware I can throw in to get the job done?
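Software-wise, the pipeline being replaced is roughly: capture audio, transcribe it locally with Whisper, then hand the text to a small local model through Ollama's HTTP API to pull out the intent. Here's a rough Python sketch of that flow, handy for benchmarking candidate hardware before buying anything; it assumes the openai-whisper package and an Ollama server on its default port, and the model name and JSON-extraction prompt are just placeholders:

import json
import urllib.request

import whisper  # pip install openai-whisper; needs ffmpeg on the PATH

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def transcribe(audio_path: str) -> str:
    # Local speech-to-text with a small Whisper model.
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"].strip()

def extract_intent(utterance: str, model: str = "llama3.2") -> str:
    # Ask a local Ollama model to turn the utterance into a structured command.
    prompt = (
        "Convert this home-automation request into JSON with keys "
        "'action', 'target' and 'time' (null if absent): " + utterance
    )
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(OLLAMA_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    text = transcribe("reminder.wav")  # e.g. "remind me to take the bins out tomorrow at 8pm"
    print(extract_intent(text))

Whisper's smaller models are cheap next to the LLM step, so the real cost question is which box can push a small quantised model through Ollama at a tolerable speed.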