We're using GitLab at our company and I love it. I could never agree with the "small teams don't need/want time tracking" argument, so I'm very happy to see they released it for CE.
I am a bit unsure what GitLab considers a small team. But I find myself getting a little exasperated when they choose to keep features out of CE because "small teams don't need it". Last week I found this was the reason they did not include global contributor statistics in CE. This would be a trivial change, and an important one for my team of ~14 engineers with in-house tools and a microservice architecture who would like to see the pulse of development.
There are no features that are never needed by a small team. To make our business model work we do need to make some features exclusive to EE. The definition we use is: is this feature more relevant for organizations that have more than 100 potential users? We thought this was the case for time tracking, but we were wrong and reversed the decision.
I am curious why you'd need to run monitoring on your own instances, since the assumption is that given a set of resources (say X RAM and Y CPU), you can support Z users. At least I remember they advertised such a number.
Now, if they could somehow have these monitoring numbers automatically sent back to a centralized server where they can gather more insights about various installations, that would be killer, but I am not sure about the feasibility, given the privacy implications (although plenty of open source software does it).
> I am curious why you'd need to run monitoring on your own instances, since the assumption is that given a set of resources (say X RAM and Y CPU), you can support Z users. At least I remember they advertised such a number.
But this is not enough. Not all hardware is created equal (one core is faster than another) and not all users are created equal (one might push many more branches per day). So to help people with the performance of their GitLab server we think the integrated metrics will be a large benefit.
But another important reason to add this is to ensure that GitLab has metrics about applications that are deployed with GitLab.
> Now, if they could somehow have these monitoring numbers automatically sent back to a centralized server where they can gather more insights about various installations, that would be killer, but I am not sure about the feasibility, given the privacy implications (although plenty of open source software does it).
We're very conscious of the privacy implications of sending data about GitLab usage back. We're doing that not with Prometheus but with a usage ping in GitLab EE. We're working on bringing that usage ping to CE, for our reasoning see https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/...
BTW, people can use Prometheus monitoring in three forms: on-premises centralized, on-premises federated, and as SaaS.
On-premises centralized is what we just shipped.
Prometheus is easy to federate, where some of the metrics of a server are included in another one. In the future we might give every deployed application its own Prometheus server in a pod.
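For the federated form, the downstream server scrapes the upstream server's /federate endpoint and pulls in a selected subset of its series. Here's a minimal sketch of the downstream scrape config; the target host and the "gitlab" job label are placeholders I made up, not anything from the release:

```yaml
# Sketch of a downstream prometheus.yml; the target hostname and job
# label below are illustrative placeholders, not anything GitLab ships.
scrape_configs:
  - job_name: 'federate'
    honor_labels: true          # keep the labels set upstream instead of overwriting them
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="gitlab"}'      # pull only the series belonging to the "gitlab" job
    static_configs:
      - targets:
          - 'gitlab-prometheus.example.com:9090'
```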
I really love GitLab, but there's this one thing that's been niggling at me about GitLab.com: every time the app is updated my sessions are invalidated and I need to go through the 2FA login all over again.
Mmm, that is not the case for me. We updated to 8.16.0 a few hours ago https://twitter.com/gitlabstatus/status/823219479287107585 and I didn't have to log in today. Can you please email support@gitlab.com so we can figure out what is wrong? Please include a link to this comment.
Ah, finally, the disk usage breakdown. As someone who pays for GitLab hosting this is a huge deal. We see our usage number climb until the instance becomes unresponsive, with no way to find where the growth is. Thank you!
>This release migrates project related statistics to a separate table, removing existing columns in the process. This migration process requires downtime, and can take 10-15 minutes for large installations.
Any place I can eyeball this upgrade script first?
Incidentally, I'm curious to know (to try and learn) how you guys test this kind of stuff. Do you have lots of different databases saved that you try this kind of major DB upgrade on?
It looks like [0] & [1] are the relevant DB migration scripts. GitLab.com is used to test new release candidate versions in a production environment before the release is shipped. I'm guessing the 10-15 minute number came from how long it took them to run the migration on their systems. They also have staging environments where they run migrations (and test new code) on stale prod data.
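For anyone who just wants the shape of the change without reading the actual scripts, here's a hypothetical sketch in plain SQL of what "migrate statistics to a separate table, removing existing columns" generally looks like. GitLab's real migration is a Rails migration, and all table and column names below are made up for illustration:

```sql
-- Hypothetical illustration only; GitLab's actual migration and its
-- table/column names differ.

-- 1. Create the dedicated statistics table.
CREATE TABLE project_statistics (
  id              serial  PRIMARY KEY,
  project_id      integer NOT NULL REFERENCES projects (id),
  commit_count    bigint  NOT NULL DEFAULT 0,
  repository_size bigint  NOT NULL DEFAULT 0
);

-- 2. Copy the existing values across. On a large installation this
--    full-table scan is where most of the downtime would go.
INSERT INTO project_statistics (project_id, commit_count, repository_size)
SELECT id, commit_count, repository_size FROM projects;

-- 3. Drop the now-redundant columns from the projects table.
ALTER TABLE projects
  DROP COLUMN commit_count,
  DROP COLUMN repository_size;
```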
And they swarmed the issue and found the cause (within the nginx config), a very odd-ball one indeed that didn't affect everyone. Fantastic troubleshooting by all the GitLab engineers, excellent communication, and top-notch support like I've never experienced from any software vendor before. A+++ would commit again ;)
Redmine is still better from the project-management planning and control POV (e.g. issues can live in different queues but still form a coherent hierarchy; milestones are composed of issues, with control of the time spent and the time still required to finish them; and the work tracked can be not only programming but other activities as well).
GitLab is much nicer for the developers.
So for a private project, GitLab might be worth a try.
We made the jump (moving various internal projects from Redmine, Trac and Assembla) and couldn't be happier. Everyone loves GitLab, updates are frequent and full of new features, stability isn't a problem, and CI is so easy to get up and running.
The only negative is that you need quite a beefy server to run it (compared to, say, Gitea or Gogs), but that's a small price to pay.