Hacker News

Right, or Entity Framework, but those are just ORMs, which only hide the impedance mismatch.

I am thinking more along the lines of an embedded DB like SQLite, but one that does native serialization, indexing, and query optimization against .NET objects and types.

Then making a "database server" is really just an app server running your C#. If you really wanted to make it like PG, you would have an app server that accepts code snippets, then compiles, runs, and returns results. Of course you would need proper sandboxing, like .NET Fiddle. You could also do something in between, like serializing LINQ expression trees to send them to the server for processing.
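A minimal sketch of the expression-tree idea: a LINQ predicate is data, not compiled code, so a client could in principle serialize it and ship it to a server that compiles and runs it next to the data. The `Order` type here is a made-up example.

```csharp
using System;
using System.Linq.Expressions;

// Hypothetical document type, purely for illustration.
public record Order(int Id, decimal Total);

public class Demo
{
    public static void Main()
    {
        // An expression tree describes the predicate as data, so a
        // serializer could walk it and send it to a "database server".
        Expression<Func<Order, bool>> filter = o => o.Total > 100m;

        // The tree is inspectable node by node.
        Console.WriteLine(filter.Body);

        // Wherever it ends up, it can be compiled and run next to the data.
        Func<Order, bool> predicate = filter.Compile();
        Console.WriteLine(predicate(new Order(1, 150m))); // True
    }
}
```

The hard part, which this sketch skips, is round-tripping the tree over the wire; `Expression` is not serializable out of the box, so real systems either transpile it (as LINQ providers do) or use a custom expression serializer.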

In an ideal world, where your code runs (client, app server, DB server) is a choice based on locality needs, not forced by runtimes (JavaScript in the browser, .NET on the app server, PG/SQL on the DB server).

You can get pretty close to this now with JavaScript/Node/PLV8, but again, I prefer C#/.NET.



This is basically what most RDBMSes have offered for the last 20 years. The Oracle RDBMS is in fact an application server, and best practice is to have an app layer built from PL/SQL packages through which you access all the data.

More than 20 years ago I actually used it as a true application server, where the database application layer would even generate all the HTML code required to display your app's UI.

And with database schemas you have proper sandboxing.


I'm with you on this one. I was thinking it would be cool to "bake in" the DB layer of an application and have it run in-process. Then you'd just provide it with block storage and it would handle the rest without having to have a separate server. You could even do compile-time migration and index generation based on your queries. With a distributed actor framework like Akka.NET or Orleans, it could probably be made to scale too.

The only downside is that the only interface to your data is through the application, but there's certainly a use-case for something like that.


So... Marten DB...


As far as I know, Marten doesn't run .NET in the PG process; it just converts .NET objects to JSON and LINQ statements into SQL. Am I missing something?


Well, you don't need to worry about schema creation and migrations. You only need to tell the store which objects it needs to know about. You declare everything in C#.

It stores everything as jsonb, so you don't need to worry about weird relationships; you only worry about your root aggregate. You can index it, apply full-text search, and add persisted columns to the mapping for convenience or speed.

So yeah, it maps the LINQ to the equivalent SQL for you, but it lets you work with your objects and worry less about persistence, because you can add and remove properties. It's the closest I've come to being able to forget about the database and focus on code.
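A rough sketch of that workflow using Marten's documented API; the connection string and the `User` type are placeholders, and this assumes a reachable Postgres instance.

```csharp
using System;
using Marten;

// Placeholder document type; Marten keys on the Id property.
public record User(Guid Id, string Name);

public static class Sketch
{
    public static async System.Threading.Tasks.Task Run()
    {
        // Everything is declared in C#: no DDL, no migration scripts.
        var store = DocumentStore.For(opts =>
        {
            opts.Connection("host=localhost;database=app;username=dev;password=dev");
            // Documents land in a jsonb column; indexes are declared here too.
            opts.Schema.For<User>().Index(x => x.Name);
        });

        await using var session = store.LightweightSession();
        session.Store(new User(Guid.NewGuid(), "ada"));
        await session.SaveChangesAsync();

        // This LINQ is translated to SQL against the jsonb column.
        var users = await session.Query<User>()
            .Where(x => x.Name == "ada")
            .ToListAsync();
    }
}
```

Schema objects (the document table, indexes) are created or patched by Marten at runtime from these declarations, which is what "you don't need to worry about creation and migrations" amounts to in practice.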

I guess that unless you write your own database from the ground up, so it supports the language as a first-class feature without dropping back to something else, it won't happen.


Again, that's great, but it's an ORM; that's what they do: they try to abstract the DB away from your language and do their best to hide the impedance mismatch.

However, the mismatch is always there: the overhead of mapping types, inefficient serialization (key names repeated over and over in JSON vs. a real schema), statements that can't be transpiled to SQL, giving you pathological cases at unexpected times, the inability to run arbitrary statements in the DB tier to take advantage of locality, and so on.

Almost everyone wants an object database, since most people are working with objects, which is why ORMs are so popular.

Closest thing I ever saw was db4o.


It's jsonb, so key names can't repeat within a document. Serialization is whatever you want to use for JSON serialization. Other than some report-type queries I wanted to write, I haven't found anything that I couldn't do in LINQ.


jsonb doesn't do key interning, as far as I know. Every field name will be repeated in every record, vs. a normal column, whose name is stored only once in the DB.

This really adds up on a large database. Mongo used to recommend short key names, and I even think some clients would translate them based on a mapping to try to save disk/memory/IO.
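The overhead is easy to see by serializing a single record; the `Reading` type and its field names below are made up for illustration.

```csharp
using System;
using System.Text.Json;

// Hypothetical document type: every serialized record repeats these
// field names, unlike a relational schema that stores each column
// name exactly once.
public record Reading(string SensorName, double Temperature, long TimestampMs);

public class Demo
{
    public static void Main()
    {
        string doc = JsonSerializer.Serialize(
            new Reading("s-1", 21.5, 1700000000000));
        Console.WriteLine(doc);
        // {"SensorName":"s-1","Temperature":21.5,"TimestampMs":1700000000000}

        // Key bytes alone: these three names cost 32 bytes in *every* record.
        int keyBytes = "SensorName".Length
                     + "Temperature".Length
                     + "TimestampMs".Length;
        Console.WriteLine($"{keyBytes} key bytes per record, times N records");
    }
}
```

At a billion rows that's tens of gigabytes of nothing but repeated key names, which is the pressure that schemes like zson's key compression try to relieve.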

https://github.com/postgrespro/zson tries to do key compression.

Also, it's not that LINQ can't do what you want; it's that if you do certain things that can't be translated to SQL, the whole intermediate result will be retrieved from the DB and deserialized, and then the LINQ statement runs in memory on the client. It's like putting ToList in the middle of your LINQ statement, where maybe a few million rows have to be processed to produce a very small result.

If the LINQ statement were run in-process on the DB server, at least it would happen local to the data and would be more akin to a table scan vs. an index; still not great, but much better than sending all the rows to the client for processing.
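The pathological shape described above can be sketched without a database by using an in-memory `IQueryable` as a stand-in; with a real provider (EF Core, Marten), the part before `AsEnumerable()` becomes SQL and everything after it runs on the client.

```csharp
using System;
using System.Linq;

public class Demo
{
    // Stand-in for a predicate no SQL translator can handle.
    static bool SomeLocalFunction(int n) => n.ToString().Contains('3');

    public static void Main()
    {
        // In-memory stand-in for a provider-backed IQueryable<Order>.
        IQueryable<int> orders = Enumerable.Range(1, 1_000_000).AsQueryable();

        var count = orders
            .Where(n => n > 100)       // translatable: would run in the DB
            .AsEnumerable()            // boundary: rows stream to the client
            .Where(SomeLocalFunction)  // not translatable: runs in memory
            .Count();

        Console.WriteLine(count);
    }
}
```

With a real provider, if the first `Where` isn't selective, nearly a million rows would be pulled from the database and deserialized just to be filtered client-side; some providers even insert that boundary silently when they hit an untranslatable expression, which is the "pathological case at an unexpected time."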



