Ycros's comments | Hacker News

I'd recommend anyone looking at these three languages to give Odin a try.

I second that! I was trying Zig for some small projects, but ended up switching to Odin, because I found it much more comfortable!

Is this why I've seen a number of "AUP violation" false positives popping up in claude code recently?


ollama has always had a weird attitude towards upstream, and then they wonder why many in the community don't like them


> they wonder why many in the community don't like them

Do they? They probably care more about their "partners".

As GP said:

  By reimplementing this layer, Ollama gets to enjoy a kind of LTS status that their partners rely on


It uses a video feed and asks you to look in certain directions. At least the one instance I've encountered did.


Yeah. Certainly something AI generated video couldn’t solve.


It shouldn't be too difficult to determine if the camera is pointed at a real face vs. a screen showing an AI-generated image.


This seems reasonable to me, surely it should be its own repository.


I prefer btop; it does all the usual process monitoring, and the latest versions also show GPUs.


Really? Mine is v1.3.2 and doesn't show Intel Iris Xe Graphics!

{UPDATE} I see: no Intel GPU support yet!


Having played around with this sort of thing in the llama.cpp ecosystem when they added it a few weeks ago, I will say that it also helps if a) your models are tuned to output JSON and b) you prompt them to do so. Anything you can do to help the output fit the grammar helps.
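
For illustration, here's a rough sketch of that combination (a grammar constraint plus a JSON-oriented prompt) using the llama-cpp-python bindings; the model path is a placeholder and the exact keyword arguments may differ between versions:

    # Constrain llama.cpp output with a GBNF grammar, and also ask for JSON
    # in the prompt so the model's natural output already fits the grammar.
    from llama_cpp import Llama, LlamaGrammar

    llm = Llama(model_path="./models/model.gguf")  # placeholder path

    # llama.cpp ships a generic JSON grammar (grammars/json.gbnf in its repo)
    grammar = LlamaGrammar.from_file("grammars/json.gbnf")

    out = llm(
        "Return a JSON object with keys 'name' and 'year' for: Ada Lovelace, 1815.",
        grammar=grammar,
        max_tokens=128,
    )
    print(out["choices"][0]["text"])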


Every time I look at LangChain it seems like unnecessary abstraction. The value in this example is the prompts.


So what are the alternatives to LangChain that the HN crowd uses?

I see two contenders:

https://github.com/minimaxir/simpleaichat/tree/main/simpleai...

https://github.com/griptape-ai/griptape

There is also the llm command line utility that has a very thin underlying library, but which might grow eventually: https://github.com/simonw/llm


Just code it yourself. Most of the core logic can be replaced with a function that inserts some parameters into a string template and calls an API.
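
A minimal sketch of the idea, where the template, model name, and example input are just placeholders:

    import os
    import openai

    openai.api_key = os.environ.get("OPENAI_API_KEY")

    # The whole "chain": a string template plus one API call.
    TEMPLATE = "Summarize the following text in {n} sentences:\n\n{text}"

    def summarize(text, n=2):
        prompt = TEMPLATE.format(text=text, n=n)
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=0,
            messages=[{"role": "user", "content": prompt}],
        )
        return response["choices"][0]["message"]["content"].strip()

    print(summarize("LangChain is a framework for developing applications powered by language models."))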


This was the answer for me as well. Pretty cool that we are still at the level where, if you have an idea, you can build a proof of concept extremely quickly and easily.


I've been enjoying using (and contributing to) Langroid, it's a new multi-agent LLM framework https://github.com/langroid/langroid


I've been actively contributing to Langroid as well. It is easy to use, and the intuitive design allows for the rapid development of LLM applications, streamlining the whole process. Highly recommended for anyone looking into this space!


If you work with JS or TS, check out this alternative that I've been working on:

https://github.com/lgrammel/modelfusion

It lets you stay in full control over the prompts and control flow while making a lot of things easier and more convenient.


LMQL - https://lmql.ai/

Guidance (microsoft) - almost abandoned - https://github.com/microsoft/guidance


How do you know guidance is almost abandoned? Did they announce it?


    import openai
    import os

    openai.api_key = os.environ.get('OPENAI_API_KEY')
    gpt_model = 'gpt-3.5-turbo'  # any chat-capable model name works here

    def completion(messages):
        response = openai.ChatCompletion.create(
            model = gpt_model, temperature = 0, messages = messages )
        return response['choices'][0]['message']['content'].strip()

    response = completion([
              {"role": "system", "content": "You are a helpful assistant."},
              {"role": "user", "content": "Who won the world series in 2020?"} ])

    #####

    import json
    import tiktoken
    import os

    tokenizer = tiktoken.get_encoding("cl100k_base")
     
    class Message:
        def __init__(self, role, text, length=None):
            self.role = role
            self.text = text
            if length is not None:
                self.length = length
            else:
                self.length = self._count_tokens(text)
            print("New message, token length is",self.length)

        def _count_tokens(self, text):
            tokens = tokenizer.encode(text)
            return len(tokens)

    class History:
        def __init__(self, ID=None):
            self.messages = []
            self.ID = ID

            if self.ID:
                self._load_from_json()

        def add(self, role, text):
            message = Message(role, text)
            self.messages.append(message)
            self._save_to_json()

        def _save_to_json(self):
            if not self.ID:
                return

            data = {
                "messages": [{"role": m.role, "text": m.text, "length": m.length} for m in self.messages]
            }
            self.create_dir_if_not_exists('conversations')
     
            with open(f"conversations/{self.ID}.json", "w") as f:
                json.dump(data, f)

        def create_dir_if_not_exists(self, directory_path):
            if not os.path.exists(directory_path):
                os.makedirs(directory_path)

        def _load_from_json(self):
            try:
                self.create_dir_if_not_exists('conversations')
                with open(f"conversations/{self.ID}.json", "r") as f:
                    data = json.load(f)
                    self.messages = [Message(m["role"], m["text"], m.get("length")) for m in data["messages"]]
            except FileNotFoundError:
                pass

        def recent_messages(self, max_tokens):
            recent_messages_reversed = []
            total_tokens = 0

            for m in reversed(self.messages):
                if total_tokens + m.length <= max_tokens:
                    recent_messages_reversed.append({
                        "role": m.role,
                        "content": m.text
                    })
                    total_tokens += m.length
                else:
                    break

            recent_messages = recent_messages_reversed[::-1]

            return recent_messages


In your loop:

            for m in reversed(self.messages):
                if total_tokens + m.length <= max_tokens:
                    recent_messages_reversed.append({
                        "role": m.role,
                        "content": m.text
                    })
                    total_tokens += m.length
                else:
                    break
It would be important to change that to not drop system prompts, ever. Otherwise a user can defeat the system prompt simply by providing enough user messages.


Good point. The way I use it though is to always add the system prompt to the front after calling that function.
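
Something like this, where the system prompt text is just an example and `history` is a `History` instance from the snippet above:

    SYSTEM_PROMPT = "You are a helpful assistant."  # example prompt

    def build_messages(history, max_tokens):
        # Truncate the conversation first, then put the system prompt back at
        # the front so it can never be dropped by the token-budget cutoff.
        recent = history.recent_messages(max_tokens)
        return [{"role": "system", "content": SYSTEM_PROMPT}] + recent

    response = completion(build_messages(history, max_tokens=3000))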


This feels like it should be a charity or a non-profit entity.


There are already a lot of foundations that haven't been able to scale the process of paying thousands of developers, due to tax and employment issues across the globe. We think creating the right commercial incentives would have a better chance, but we might also be wrong. Time will tell...


The recently launched Playdate handheld gaming device uses a monochrome Sharp brand Memory LCD with no backlight.


Yeah, so reflective LCD technology isn't quite dead. I guess when both reflectivity and quick refresh rates are required, monochrome LCD is still the only solution, since Liquavista and Mirasol were discontinued. For color displays, I believe there is simply no solution at all with decent reflectivity. The E Ink Gallery 3 display seems to mostly solve the low-reflectivity problem, since it does not rely on standard additive sub-pixel color mixing, but there is no similar solution for higher refresh rates. There were improved Mirasol prototypes which apparently solved the issue, but shortly after they were shown, development of the Mirasol technology was discontinued.

