stratospark's comments | Hacker News

Prompts are just the starting point. Take image generation, for example, and the rise of ComfyUI and ControlNet, with complex node-based workflows allowing for even more creative control. https://www.google.com/search?q=comfyui+workflows&tbm=isch

I see these AI models as lowering the barrier to entry, while giving more power to the users that choose to explore that direction.


All that amounts to just more complex ways of nudging the prompt, because that prompt is all an LLM can "comprehend." You still have no actual creative control, the black box is still doing everything. You didn't clear the barrier to entry, you just stole the valor of real artists.


So wrong. There are some great modern artists in the AI space now who are using advanced AI tools to advance their craft. Look at Eclectic Method before AI and look at how he's evolving artistically with AI.


Shadiversity made the same class of attribution error. AI users aren't evolving artistically, the software they are using to simulate art is improving over time. They are not creators, they are consumers.


Can photography be an art? All a photographer does is run around the world with a camera and take snapshots. He has no creative control.


Photographers have a great deal of creative control. Put the same camera in your hands versus a professional and you will get different results even with the same subject. You taking a snapshot in the woods are not Ansel Adams, nor are you taking a selfie Annie Leibovitz. The skill and artistic intent of the human being using the tool matters.

Meanwhile with AI, given the same model and inputs - including a prompt which may include the names of specific artists "in the style of x" - one can reproduce mathematically equivalent results, regardless of the person using it. If one can perfectly replicate the work by simply replicating the tools, then the human using the tool adds nothing of unique personal value to the end result. Even if one were to concede that AI generated content were art, it still wouldn't be the art of the user, it would be the art of the model.
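
To put that claim in concrete terms: with an open-weights model you can pin every input and get the same picture back no matter who runs it. Here's a rough sketch using the Hugging Face diffusers library; the model name, prompt, and seed are arbitrary, and bit-exact reproduction assumes the same hardware and library versions:

  import torch
  from diffusers import StableDiffusionPipeline

  pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
  prompt = "a forest at dawn in the style of x"

  def generate(seed):
    # Same model, same prompt, same seed -> the same initial noise,
    # hence (on identical hardware/software) the same image.
    g = torch.Generator("cuda").manual_seed(seed)
    return pipe(prompt, generator=g).images[0]

  a = generate(42)
  b = generate(42)  # pixel-identical to a; the person at the keyboard adds nothing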



Very cool! I shared a small project to demo and explain how I used convolutional neural networks to classify food images: http://blog.stratospark.com/deep-learning-applied-food-class....

I'd be curious about the calorie detection. I'm wondering if it's using some kind of weighted sum of image segmentation proportions, or doing end-to-end deep learning.
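
To make the question concrete, here's roughly what I imagine the segmentation-based version would look like; everything below (class names, calorie densities, the pixel-to-area scale) is made up for illustration:

  import numpy as np

  # Hypothetical calorie densities (kcal per cm^2 of visible food) and a
  # made-up pixel-to-area scale (a real system would need a size reference).
  KCAL_PER_CM2 = {'rice': 1.3, 'chicken': 2.1, 'salad': 0.3}
  CM2_PER_PIXEL = 0.0004

  def estimate_calories(seg_mask, id_to_name):
    # seg_mask: (H, W) array of per-pixel class ids from a segmentation model
    total = 0.0
    for class_id, name in id_to_name.items():
      if name not in KCAL_PER_CM2:
        continue  # skip background / non-food classes
      pixels = np.count_nonzero(seg_mask == class_id)
      total += pixels * CM2_PER_PIXEL * KCAL_PER_CM2[name]
    return total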

Anyway, cool product, love to see where it goes!


Hey, that's a really cool project.

We haven't tried going directly from image to calories yet, and I'm not sure that we ever will. Instead the plan is to do end-to-end portion size prediction for some of the classes. Segmentation would be cool but it's really hard to get the data for it.
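
To illustrate what I mean by end-to-end, here's a rough Keras sketch (not our actual model; the backbone, input size, and head names are placeholders): a shared CNN backbone with a class head and a portion-size regression head, trained straight from pixels with no segmentation step.

  from tensorflow import keras
  from tensorflow.keras import layers

  NUM_CLASSES = 101  # e.g. the Food-101 label set

  backbone = keras.applications.MobileNetV2(
      input_shape=(224, 224, 3), include_top=False, pooling='avg')

  x = backbone.output
  food_class = layers.Dense(NUM_CLASSES, activation='softmax', name='food_class')(x)
  # Regress portion size (e.g. grams) directly from the shared image features.
  portion = layers.Dense(1, activation='relu', name='portion_grams')(x)

  model = keras.Model(backbone.input, [food_class, portion])
  model.compile(optimizer='adam',
                loss={'food_class': 'categorical_crossentropy',
                      'portion_grams': 'mse'})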

By the way, plotting images with matplotlib is a pain. Try using HTML with base64 encoded images instead. Something like this should work:

  import base64
  from io import BytesIO
  from itertools import zip_longest


  def base64image(path_or_image, prefix='data:image/jpeg;base64,'):
    # get_pil_image is assumed to return a PIL Image given a path or array
    s = BytesIO()
    get_pil_image(path_or_image).save(s, format='JPEG')
    return prefix + base64.b64encode(s.getvalue()).decode('utf-8')


  def show_images(paths_or_images, predictions=None, sz=200, urls=False):
    from IPython.core.display import display, HTML
    predictions = predictions if predictions is not None else []
    # Render each image as an inline <img>; the red border's opacity is
    # (1 - prediction), so low-confidence predictions stand out.
    img_tags = map(lambda p: '''
      <div style="display: inline-block; margin: 2px; width: {sz}px; height: {sz}px; position: relative">
        <img src="{b}"
             style="max-height: 100%; max-width: 100%;
                   position: absolute; left: 50%; top: 50%; transform: translate(-50%, -50%);
                   border: {bsz}px solid rgba(255, 0, 0, {pred});"/>
      </div>
      '''.format(b=p[0] if urls else base64image(p[0]), pred=1 - p[1] if p[1] is not None else 0, sz=sz, bsz=5),
                 zip_longest(paths_or_images, predictions))
    display(HTML('<div style="text-align: center">{}</div>'.format(''.join(img_tags))))
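
For example, in a notebook cell (the file names and scores here are just placeholders):

  show_images(['images/pizza.jpg', 'images/sushi.jpg'],
              predictions=[0.97, 0.42])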


Code for training in a Python Jupyter Notebook, plus the web UI in React and Keras.js, is available here: https://github.com/stratospark/food-101-keras


I've taken a look at the product site and I love it! I think it's perfect. Our son is obsessed with Minecraft and has recently been expressing interest in learning how to program. One question about the pricing: would I be able to go through the lessons as well as my son? Or would we need to pay for separate accounts? The FAQ mentions something about up to 4 players per server.


Check out Field: http://openendedgroup.com/field

What sort of project are you working on?


Oh, I was just working on some simple Python site. Our instructor required us to use Python to generate the whole HTML document, and I kinda have a bunch of if statements for generating different HTML elements and stuff. It was quite tedious that he asked us to do that, but it's an entry-level course, so things don't really match the real world. I would really benefit from an IDE that can preview the output of some code, so that I don't have to rerun it in the browser every time I make any changes.
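
To give you an idea, the kind of thing I end up writing looks roughly like this (a simplified made-up example, not my actual assignment code):

  def render_element(item):
    # One branch per element type -- the assignment is basically a pile of these.
    if item['type'] == 'heading':
      return '<h1>{}</h1>'.format(item['text'])
    elif item['type'] == 'link':
      return '<a href="{}">{}</a>'.format(item['url'], item['text'])
    else:
      return '<p>{}</p>'.format(item['text'])

  def render_page(items):
    body = ''.join(render_element(i) for i in items)
    return '<html><body>{}</body></html>'.format(body)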

Anyway, thanks for your recommendation! Field looks like an interesting IDE to try out with its awesome "code canvas" feature. I'm not too concerned about the Processing/artistic part of it :p


Exploratory trying is a form of discoverability. Plus the whole social context of learning new gestures through friends using similar devices.

I looked at a Blackberry Playbook at the store and tried to use it without having ever seen anyone else use one before. I spent a good 10 minutes trying to figure out how to close an app before finally looking it up online with my iPhone.


Maybe I used the wrong word. Affordances? The "affordance" of pinch-to-zoom is a flat surface.


We need a Todo example project to make this legit: https://github.com/addyosmani/todomvc/tree/master/todo-examp...


Interesting. With this and ClojureScript, I might need to take a deeper look at Closure.


Detrus mentioned Processing and openFrameworks. Basically programming languages for visual artists.

Also Maker Faire for artsy DIY and hardware stuff: http://makerfaire.com/

Zer01, a Bay Area art/technology network: http://zero1.org/

I volunteer for those last two, send me a message if you're in the Bay Area and want to chat! I'm definitely a techie software engineer, but I'm fascinated by art, good design, and grand ideas about anything =)


Thanks, I'm not in the Bay Area now but planning to move there in the next year. I'll definitely be looking more into both of those.

