Compare commits

...

73 Commits

Author SHA1 Message Date
Andras Bacsai
41c84e3642 Merge pull request #1001 from coollabsio/next
v3.12.29
2023-03-20 13:57:01 +01:00
Andras Bacsai
2bad98424f switch back to aarch-runners 2023-03-20 13:49:41 +01:00
Andras Bacsai
bc6b1e2dea fix: remove .git dir from final image 2023-03-20 13:05:53 +01:00
Andras Bacsai
911c15d1be update versions 2023-03-20 12:44:45 +01:00
Andras Bacsai
f79d570870 fix: gitea 2023-03-20 12:28:23 +01:00
Andras Bacsai
7fffa9fba5 Merge branch 'main' into next 2023-03-20 12:05:21 +01:00
Andras Bacsai
cbd634fb99 Update README.md 2023-03-17 15:31:00 +01:00
Andras Bacsai
7ae7436d4f Update staging-release.yml 2023-03-17 15:27:16 +01:00
Andras Bacsai
641bada100 ignore dockerhub releases 2023-03-16 13:54:58 +01:00
Andras Bacsai
3416d8d88e only arm 2023-03-16 13:42:19 +01:00
Andras Bacsai
0bb503368b concurrency 2023-03-16 13:37:49 +01:00
Andras Bacsai
ac3a77c3c7 no qemu 2023-03-16 13:35:04 +01:00
Andras Bacsai
79b4178d76 vcpu increase 2023-03-16 13:32:48 +01:00
Andras Bacsai
42a61296d7 test buildjet 2023-03-16 13:29:13 +01:00
Andras Bacsai
e8088e2a70 Merge pull request #993 from coollabsio/next
fix: revert from dockerhub if ghcr.io does not exists
2023-03-16 13:10:58 +01:00
Andras Bacsai
c4d39aced2 fix: revert from dockerhub if ghcr.io does not exists 2023-03-16 13:10:34 +01:00
Andras Bacsai
b40a5adeb0 update GH actions 2023-03-16 12:28:40 +01:00
Andras Bacsai
558a900620 Merge pull request #992 from coollabsio/next
Move to ghcr.io
2023-03-16 12:18:57 +01:00
Andras Bacsai
6b5e5a504d updates 2023-03-16 12:09:48 +01:00
Andras Bacsai
e44dca2464 updates 2023-03-16 12:01:57 +01:00
Andras Bacsai
e1f84b277a updates 2023-03-16 11:57:04 +01:00
Andras Bacsai
2518f46b08 remove fluentbit + pocketbase builds 2023-03-16 11:56:19 +01:00
Andras Bacsai
01e18a9496 Merge pull request #991 from coollabsio/ghcr
Move to ghcr from dockerhub
2023-03-16 10:55:22 +01:00
Andras Bacsai
564ca709d3 updates 2023-03-16 10:53:54 +01:00
Andras Bacsai
a54a36ae18 updates 2023-03-16 10:50:26 +01:00
Andras Bacsai
43603b0961 update 2023-03-16 10:26:20 +01:00
Andras Bacsai
96cd99f904 fixes 2023-03-16 10:23:14 +01:00
Andras Bacsai
3438d10e25 test 2023-03-16 10:13:44 +01:00
Andras Bacsai
022ccb42a1 test 2023-03-16 10:04:53 +01:00
Andras Bacsai
e6d72e9f87 test 2023-03-16 09:56:39 +01:00
Andras Bacsai
06e8a6af23 test 2023-03-16 09:38:20 +01:00
Andras Bacsai
ac188d137a test 2023-03-16 09:32:20 +01:00
Andras Bacsai
cae466745a test 2023-03-16 09:10:11 +01:00
Andras Bacsai
d61f16dab0 test 2023-03-16 08:48:37 +01:00
Andras Bacsai
02ba277a86 fix: show ip address as host in public dbs 2023-03-07 13:25:08 +01:00
Andras Bacsai
470ff49a02 Merge pull request #981 from coollabsio/next
v3.12.26
2023-03-07 12:19:36 +01:00
Andras Bacsai
04d741581d Merge pull request #980 from hyenabyte/main
Fixing multiple remotes breaking the server overview
2023-03-07 12:12:59 +01:00
Andras Bacsai
038f210148 Merge branch 'main' into next 2023-03-07 11:47:29 +01:00
Andras Bacsai
2adad3a7bd fix: handle log format volumes 2023-03-07 11:46:23 +01:00
Andras Bacsai
05fb26a49b remove console logs 2023-03-07 11:15:43 +01:00
Andras Bacsai
1c237affb4 feat: add host path to any container 2023-03-07 11:15:05 +01:00
Andras Bacsai
3e81d7e9cb fix: replace . & .. & $PWD with ~ 2023-03-07 10:44:53 +01:00
David Koch Gregersen
edb66620c1 Adding a check when reading ssh config file
Also adds comments to the createRemoteEngineConfiguration function
2023-03-07 10:43:34 +01:00
Andras Bacsai
04f7e8e777 fix: host volumes 2023-03-07 10:31:10 +01:00
David Koch Gregersen
eee201013c Fixing multiple remotes breaking the server overview 2023-03-06 22:31:01 +01:00
Andras Bacsai
1190cb4ea1 Update deployApplication.ts 2023-03-04 18:49:34 +01:00
Andras Bacsai
507100ea0b Update package.json 2023-03-04 18:48:38 +01:00
Andras Bacsai
9b13912b6d Update common.ts 2023-03-04 18:48:28 +01:00
Andras Bacsai
ee65deebfd fix: nestjs buildpack 2023-03-04 18:05:01 +01:00
Andras Bacsai
ba9fa442d1 Merge pull request #973 from coollabsio/next
v3.12.23
2023-03-04 15:11:42 +01:00
Andras Bacsai
87da27f9bf fix: publishDirectory 2023-03-04 15:06:35 +01:00
Andras Bacsai
b5bc5fe2c6 Merge pull request #965 from coollabsio/next
v3.12.22
2023-03-03 09:13:56 +01:00
Andras Bacsai
d2329360d0 Merge pull request #950 from hemangjoshi37a/main
added `star-history`
2023-03-02 17:33:22 +01:00
Andras Bacsai
7ece0ae10a Merge pull request #955 from eltociear/patch-1
fix typo in _GitlabRepositories.svelte
2023-03-02 17:32:26 +01:00
Andras Bacsai
f931b47eb8 Merge pull request #957 from addianto/fix/pack
Fix PACK_VERSION build argument in Dockerfile
2023-03-02 17:31:55 +01:00
Andras Bacsai
7f7eb12ded fix: empty port in docker compose 2023-03-02 17:22:49 +01:00
Andras Bacsai
c0940f7a19 fix: cannot delete resource when you are not on root team 2023-03-02 17:12:29 +01:00
Andras Bacsai
9dfde11e35 possible fix: vaultwarden 2023-03-02 16:56:44 +01:00
Andras Bacsai
6f15cc2dbc fix: base directory not found 2023-03-02 16:52:55 +01:00
Daya Adianto
120308638f fix: set PACK_VERSION to 0.27.0
This commit removes the `v` prefix in the version identifier assigned to
PACK_VERSION build argument. The `pack` script actually available on
Coolify's CDN, but named without `v` prefix in the script's version identifier.

Related issue: #689, which reported that the `pack` script in the
container image is a HTML 404 file instead of the actual `pack`
executable.
2023-02-25 17:27:41 +07:00
Ikko Eltociear Ashimine
1d04ef99bb fix typo in _GitlabRepositories.svelte
occured -> occurred
2023-02-24 11:24:55 +09:00
Hemang Joshi
9b00d177ef added star-history
added `star-history`
2023-02-22 16:21:32 +05:30
Andras Bacsai
884524c448 Merge pull request #945 from coollabsio/next
v3.12.21
2023-02-21 13:55:29 +01:00
Andras Bacsai
3ae1e7e87d remove debug 2023-02-21 13:47:28 +01:00
Andras Bacsai
81f885311d debug 2023-02-21 13:24:23 +01:00
Andras Bacsai
d9362f09d8 debug 2023-02-21 13:23:34 +01:00
Andras Bacsai
906d181d1b debug 2023-02-21 13:15:17 +01:00
Andras Bacsai
44b8812a7b debug 2023-02-21 13:08:14 +01:00
Andras Bacsai
3308c45e88 Merge pull request #943 from coollabsio/next
v3.12.21
2023-02-21 13:02:39 +01:00
Andras Bacsai
e530ecf9f9 fix 2023-02-21 12:59:21 +01:00
Andras Bacsai
51b5edb04f hmm fix 2023-02-21 12:48:06 +01:00
Andras Bacsai
f0d89f850e fix 2023-02-21 12:45:22 +01:00
Andras Bacsai
b777e08542 fix: arm servics 2023-02-21 12:35:20 +01:00
49 changed files with 732 additions and 524 deletions

View File

@@ -1,12 +1,9 @@
-name: fluent-bit-release
+name: Production Release to DockerHub
 on:
   push:
-    paths:
-      - "others/fluentbit"
-      - ".github/workflows/fluent-bit-release.yml"
-    branches:
-      - next
+    branches:
+      - "this-branch-does-not-exists"
 jobs:
   arm64:
@@ -23,13 +20,18 @@ jobs:
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Get current package version
+        uses: martinbeentjes/npm-get-version-action@v1.2.3
+        id: package-version
       - name: Build and push
         uses: docker/build-push-action@v2
         with:
-          context: others/fluentbit/
+          context: .
          platforms: linux/arm64
          push: true
-          tags: coollabsio/coolify-fluent-bit:1.0.0-arm64
+          tags: coollabsio/coolify:${{steps.package-version.outputs.current-version}}-arm64
+          cache-from: type=registry,ref=coollabsio/coolify:buildcache-arm64
+          cache-to: type=registry,ref=coollabsio/coolify:buildcache-arm64,mode=max
   amd64:
     runs-on: ubuntu-latest
     steps:
@@ -44,13 +46,18 @@ jobs:
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Get current package version
+        uses: martinbeentjes/npm-get-version-action@v1.2.3
+        id: package-version
       - name: Build and push
         uses: docker/build-push-action@v3
         with:
-          context: others/fluentbit/
+          context: .
          platforms: linux/amd64
          push: true
-          tags: coollabsio/coolify-fluent-bit:1.0.0-amd64
+          tags: coollabsio/coolify:${{steps.package-version.outputs.current-version}}
+          cache-from: type=registry,ref=coollabsio/coolify:buildcache-amd64
+          cache-to: type=registry,ref=coollabsio/coolify:buildcache-amd64,mode=max
   aarch64:
     runs-on: [self-hosted, arm64]
     steps:
@@ -65,13 +72,18 @@ jobs:
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Get current package version
+        uses: martinbeentjes/npm-get-version-action@v1.2.3
+        id: package-version
       - name: Build and push
         uses: docker/build-push-action@v2
         with:
-          context: others/fluentbit/
+          context: .
          platforms: linux/aarch64
          push: true
-          tags: coollabsio/coolify-fluent-bit:1.0.0-aarch64
+          tags: coollabsio/coolify:${{steps.package-version.outputs.current-version}}-aarch64
+          cache-from: type=registry,ref=coollabsio/coolify:buildcache-aarch64
+          cache-to: type=registry,ref=coollabsio/coolify:buildcache-aarch64,mode=max
   merge-manifest:
     runs-on: ubuntu-latest
     needs: [amd64, arm64, aarch64]
@@ -87,7 +99,14 @@ jobs:
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Get current package version
+        uses: martinbeentjes/npm-get-version-action@v1.2.3
+        id: package-version
       - name: Create & publish manifest
         run: |
-          docker manifest create coollabsio/coolify-fluent-bit:1.0.0 --amend coollabsio/coolify-fluent-bit:1.0.0-amd64 --amend coollabsio/coolify-fluent-bit:1.0.0-arm64 --amend coollabsio/coolify-fluent-bit:1.0.0-aarch64
-          docker manifest push coollabsio/coolify-fluent-bit:1.0.0
+          docker buildx imagetools create --append coollabsio/coolify:${{steps.package-version.outputs.current-version}}-arm64 --append coollabsio/coolify:${{steps.package-version.outputs.current-version}}-aarch64 --tag coollabsio/coolify:${{steps.package-version.outputs.current-version}}
+          docker buildx imagetools create coollabsio/coolify:${{steps.package-version.outputs.current-version}} --tag coollabsio/coolify:latest
+      - uses: sarisia/actions-status-discord@v1
+        if: always()
+        with:
+          webhook: ${{ secrets.DISCORD_WEBHOOK_PROD_RELEASE_CHANNEL }}

View File

@@ -1,36 +1,14 @@
-name: production-release
+name: Production Release to ghcr.io
 on:
   release:
     types: [released]
+env:
+  REGISTRY: ghcr.io
+  IMAGE_NAME: "coollabsio/coolify"
 jobs:
-  arm64:
-    runs-on: [self-hosted, arm64]
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v3
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v1
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v1
-      - name: Login to DockerHub
-        uses: docker/login-action@v1
-        with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-      - name: Get current package version
-        uses: martinbeentjes/npm-get-version-action@v1.2.3
-        id: package-version
-      - name: Build and push
-        uses: docker/build-push-action@v2
-        with:
-          context: .
-          platforms: linux/arm64
-          push: true
-          tags: coollabsio/coolify:${{steps.package-version.outputs.current-version}}-arm64
-          cache-from: type=registry,ref=coollabsio/coolify:buildcache-arm64
-          cache-to: type=registry,ref=coollabsio/coolify:buildcache-arm64,mode=max
   amd64:
     runs-on: ubuntu-latest
     steps:
@@ -40,23 +18,27 @@ jobs:
         uses: docker/setup-qemu-action@v2
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v2
-      - name: Login to DockerHub
+      - name: Login to ghcr.io
         uses: docker/login-action@v2
         with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-      - name: Get current package version
-        uses: martinbeentjes/npm-get-version-action@v1.2.3
-        id: package-version
+          registry: ${{ env.REGISTRY }}
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Extract metadata
+        id: meta
+        uses: docker/metadata-action@v4
+        with:
+          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+          tags: |
+            type=semver,pattern={{version}}
       - name: Build and push
         uses: docker/build-push-action@v3
         with:
           context: .
           platforms: linux/amd64
           push: true
-          tags: coollabsio/coolify:${{steps.package-version.outputs.current-version}}
-          cache-from: type=registry,ref=coollabsio/coolify:buildcache-amd64
-          cache-to: type=registry,ref=coollabsio/coolify:buildcache-amd64,mode=max
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
   aarch64:
     runs-on: [self-hosted, arm64]
     steps:
@@ -66,26 +48,30 @@ jobs:
         uses: docker/setup-qemu-action@v1
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v1
-      - name: Login to DockerHub
-        uses: docker/login-action@v1
+      - name: Login to ghcr.io
+        uses: docker/login-action@v2
         with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-      - name: Get current package version
-        uses: martinbeentjes/npm-get-version-action@v1.2.3
-        id: package-version
+          registry: ${{ env.REGISTRY }}
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Extract metadata
+        id: meta
+        uses: docker/metadata-action@v4
+        with:
+          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+          tags: |
+            type=semver,pattern={{version}}-aarch64
       - name: Build and push
-        uses: docker/build-push-action@v2
+        uses: docker/build-push-action@v3
         with:
           context: .
           platforms: linux/aarch64
           push: true
-          tags: coollabsio/coolify:${{steps.package-version.outputs.current-version}}-aarch64
-          cache-from: type=registry,ref=coollabsio/coolify:buildcache-aarch64
-          cache-to: type=registry,ref=coollabsio/coolify:buildcache-aarch64,mode=max
+          tags: ${{ steps.meta.outputs.tags }}-aarch64
+          labels: ${{ steps.meta.outputs.labels }}
   merge-manifest:
     runs-on: ubuntu-latest
-    needs: [amd64, arm64, aarch64]
+    needs: [amd64, aarch64]
     steps:
       - name: Checkout
         uses: actions/checkout@v3
@@ -93,18 +79,23 @@
         uses: docker/setup-qemu-action@v2
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v2
-      - name: Login to DockerHub
+      - name: Login to ghcr.io
         uses: docker/login-action@v2
         with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-      - name: Get current package version
-        uses: martinbeentjes/npm-get-version-action@v1.2.3
-        id: package-version
+          registry: ${{ env.REGISTRY }}
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Extract metadata
+        id: meta
+        uses: docker/metadata-action@v4
+        with:
+          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+          tags: |
+            type=semver,pattern={{version}}
       - name: Create & publish manifest
         run: |
-          docker buildx imagetools create --append coollabsio/coolify:${{steps.package-version.outputs.current-version}}-arm64 --append coollabsio/coolify:${{steps.package-version.outputs.current-version}}-aarch64 --tag coollabsio/coolify:${{steps.package-version.outputs.current-version}}
-          docker buildx imagetools create coollabsio/coolify:${{steps.package-version.outputs.current-version}} --tag coollabsio/coolify:latest
+          docker buildx imagetools create --append ${{ fromJSON(steps.meta.outputs.json).tags[0] }}-aarch64 --tag ${{ fromJSON(steps.meta.outputs.json).tags[0] }}
+          docker buildx imagetools create --append ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest-aarch64 --tag ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
       - uses: sarisia/actions-status-discord@v1
         if: always()
         with:

.github/workflows/release-candidate.yml (new file, 110 lines)
View File

@@ -0,0 +1,110 @@
name: Release Candidate to ghcr.io
on:
release:
types: [prereleased]
env:
REGISTRY: ghcr.io
IMAGE_NAME: "coollabsio/coolify"
jobs:
amd64:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to ghcr.io
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
- name: Build and push
uses: docker/build-push-action@v3
with:
context: .
platforms: linux/amd64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
aarch64:
runs-on: [self-hosted, arm64]
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Login to ghcr.io
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
- name: Build and push
uses: docker/build-push-action@v3
with:
context: .
platforms: linux/aarch64
push: true
tags: ${{ steps.meta.outputs.tags }}-aarch64
labels: ${{ steps.meta.outputs.labels }}
merge-manifest:
runs-on: ubuntu-latest
needs: [amd64, aarch64]
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to ghcr.io
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
- name: Create & publish manifest
run: |
docker buildx imagetools create --append ${{ steps.meta.outputs.tags }}-aarch64 --tag ${{ steps.meta.outputs.tags }}
- uses: sarisia/actions-status-discord@v1
if: always()
with:
webhook: ${{ secrets.DISCORD_WEBHOOK_DEV_RELEASE_CHANNEL }}

View File

@@ -1,19 +1,18 @@
-name: pocketbase-release
+name: Staging Release to DockerHub
 on:
   push:
-    paths:
-      - "others/pocketbase/*"
-      - ".github/workflows/pocketbase-release.yml"
     branches:
-      - next
-      - main
+      - "this-branch-does-not-exists"
 jobs:
   arm64:
     runs-on: [self-hosted, arm64]
     steps:
       - name: Checkout
         uses: actions/checkout@v3
+        with:
+          ref: "next"
       - name: Set up QEMU
         uses: docker/setup-qemu-action@v1
       - name: Set up Docker Buildx
@@ -23,18 +22,25 @@ jobs:
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Get current package version
+        uses: martinbeentjes/npm-get-version-action@v1.2.3
+        id: package-version
       - name: Build and push
         uses: docker/build-push-action@v2
         with:
-          context: others/pocketbase/
+          context: .
          platforms: linux/arm64
          push: true
-          tags: coollabsio/pocketbase:0.12.3-arm64
+          tags: coollabsio/coolify:next-arm64
+          cache-from: type=registry,ref=coollabsio/coolify:buildcache-next-arm64
+          cache-to: type=registry,ref=coollabsio/coolify:buildcache-next-arm64,mode=max
   amd64:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
         uses: actions/checkout@v3
+        with:
+          ref: "next"
       - name: Set up QEMU
         uses: docker/setup-qemu-action@v2
       - name: Set up Docker Buildx
@@ -44,37 +50,21 @@ jobs:
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Get current package version
+        uses: martinbeentjes/npm-get-version-action@v1.2.3
+        id: package-version
       - name: Build and push
         uses: docker/build-push-action@v3
         with:
-          context: others/pocketbase/
+          context: .
          platforms: linux/amd64
          push: true
-          tags: coollabsio/pocketbase:0.12.3-amd64
-  aarch64:
-    runs-on: [self-hosted, arm64]
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v3
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v1
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v1
-      - name: Login to DockerHub
-        uses: docker/login-action@v1
-        with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-      - name: Build and push
-        uses: docker/build-push-action@v2
-        with:
-          context: others/pocketbase/
-          platforms: linux/aarch64
-          push: true
-          tags: coollabsio/pocketbase:0.12.3-aarch64
+          tags: coollabsio/coolify:next
+          cache-from: type=registry,ref=coollabsio/coolify:buildcache-next-amd64
+          cache-to: type=registry,ref=coollabsio/coolify:buildcache-next-amd64,mode=max
   merge-manifest:
     runs-on: ubuntu-latest
-    needs: [amd64, arm64, aarch64]
+    needs: [arm64, amd64]
     steps:
       - name: Checkout
         uses: actions/checkout@v3
@@ -89,5 +79,8 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
       - name: Create & publish manifest
         run: |
-          docker manifest create coollabsio/pocketbase:0.12.3 --amend coollabsio/pocketbase:0.12.3-amd64 --amend coollabsio/pocketbase:0.12.3-arm64 --amend coollabsio/pocketbase:0.12.3-aarch64
-          docker manifest push coollabsio/pocketbase:0.12.3
+          docker buildx imagetools create --append coollabsio/coolify:next-arm64 --tag coollabsio/coolify:next
+      - uses: sarisia/actions-status-discord@v1
+        if: always()
+        with:
+          webhook: ${{ secrets.DISCORD_WEBHOOK_DEV_RELEASE_CHANNEL }}

View File

@@ -1,76 +1,77 @@
-name: staging-release
+name: Staging Release to ghcr.io
+concurrency:
+  group: staging_environment
+  cancel-in-progress: true
 on:
   push:
-    paths:
-      - "**"
-      - "!others/fluentbit"
-      - "!others/pocketbase"
-      - "!.github/workflows/fluent-bit-release.yml"
-      - "!.github/workflows/pocketbase-release.yml"
-    branches:
-      - next
+    branches-ignore:
+      - "main"
+      - "v4"
+env:
+  REGISTRY: ghcr.io
+  IMAGE_NAME: "coollabsio/coolify"
 jobs:
-  arm64:
-    runs-on: [self-hosted, arm64]
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v3
-        with:
-          ref: "next"
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v1
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v1
-      - name: Login to DockerHub
-        uses: docker/login-action@v1
-        with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-      - name: Get current package version
-        uses: martinbeentjes/npm-get-version-action@v1.2.3
-        id: package-version
-      - name: Build and push
-        uses: docker/build-push-action@v2
-        with:
-          context: .
-          platforms: linux/arm64
-          push: true
-          tags: coollabsio/coolify:next-arm64
-          cache-from: type=registry,ref=coollabsio/coolify:buildcache-next-arm64
-          cache-to: type=registry,ref=coollabsio/coolify:buildcache-next-arm64,mode=max
   amd64:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     steps:
       - name: Checkout
         uses: actions/checkout@v3
         with:
           ref: "next"
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v2
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v2
-      - name: Login to DockerHub
+      - name: Login to ghcr.io
         uses: docker/login-action@v2
         with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-      - name: Get current package version
-        uses: martinbeentjes/npm-get-version-action@v1.2.3
-        id: package-version
+          registry: ${{ env.REGISTRY }}
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Extract metadata (tags, labels)
+        id: meta
+        uses: docker/metadata-action@v4
+        with:
+          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
       - name: Build and push
         uses: docker/build-push-action@v3
         with:
           context: .
           platforms: linux/amd64
           push: true
-          tags: coollabsio/coolify:next
-          cache-from: type=registry,ref=coollabsio/coolify:buildcache-next-amd64
-          cache-to: type=registry,ref=coollabsio/coolify:buildcache-next-amd64,mode=max
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
+  aarch64:
+    runs-on:
+      group: aarch-runners
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v3
+        with:
+          ref: "next"
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v2
+      - name: Login to ghcr.io
+        uses: docker/login-action@v2
+        with:
+          registry: ${{ env.REGISTRY }}
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Extract metadata (tags, labels)
+        id: meta
+        uses: docker/metadata-action@v4
+        with:
+          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+      - name: Build and push
+        uses: docker/build-push-action@v3
+        with:
+          context: .
+          platforms: linux/aarch64
+          push: true
+          tags: ${{ steps.meta.outputs.tags }}-aarch64
+          labels: ${{ steps.meta.outputs.labels }}
   merge-manifest:
     runs-on: ubuntu-latest
-    needs: [arm64, amd64]
+    needs: [amd64, aarch64]
     steps:
       - name: Checkout
         uses: actions/checkout@v3
@@ -78,14 +79,20 @@ jobs:
         uses: docker/setup-qemu-action@v2
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v2
-      - name: Login to DockerHub
+      - name: Login to ghcr.io
         uses: docker/login-action@v2
         with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
+          registry: ${{ env.REGISTRY }}
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Extract metadata (tags, labels)
+        id: meta
+        uses: docker/metadata-action@v4
+        with:
+          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
       - name: Create & publish manifest
         run: |
-          docker buildx imagetools create --append coollabsio/coolify:next-arm64 --tag coollabsio/coolify:next
+          docker buildx imagetools create --append ${{ steps.meta.outputs.tags }}-aarch64 --tag ${{ steps.meta.outputs.tags }}
       - uses: sarisia/actions-status-discord@v1
         if: always()
         with:

View File

@@ -22,7 +22,7 @@ ARG DOCKER_VERSION=20.10.18
 # Reverted to 2.6.1 because of this https://github.com/docker/compose/issues/9704. 2.9.0 still has a bug.
 ARG DOCKER_COMPOSE_VERSION=2.6.1
 # https://github.com/buildpacks/pack/releases
-ARG PACK_VERSION=v0.27.0
+ARG PACK_VERSION=0.27.0
 RUN apt update && apt -y install --no-install-recommends ca-certificates git git-lfs openssh-client curl jq cmake sqlite3 openssl psmisc python3
 RUN apt-get clean autoclean && apt-get autoremove --yes && rm -rf /var/lib/{apt,dpkg,cache,log}/
@@ -38,7 +38,7 @@ RUN curl -SL https://cdn.coollabs.io/bin/$TARGETPLATFORM/pack-$PACK_VERSION -o /
 RUN chmod +x ~/.docker/cli-plugins/docker-compose /usr/bin/docker /usr/local/bin/pack
 COPY --from=build /app/apps/api/build/ .
-COPY --from=build /app/others/fluentbit/ ./fluentbit
+# COPY --from=build /app/others/fluentbit/ ./fluentbit
 COPY --from=build /app/apps/ui/build/ ./public
 COPY --from=build /app/apps/api/prisma/ ./prisma
 COPY --from=build /app/apps/api/package.json .
@@ -50,4 +50,4 @@ RUN pnpm install -p
 EXPOSE 3000
 ENV CHECKPOINT_DISABLE=1
 CMD pnpm start

View File

@@ -9,7 +9,7 @@ ARG DOCKER_VERSION=20.10.18
 # Reverted to 2.6.1 because of this https://github.com/docker/compose/issues/9704. 2.9.0 still has a bug.
 ARG DOCKER_COMPOSE_VERSION=2.6.1
 # https://github.com/buildpacks/pack/releases
-ARG PACK_VERSION=v0.27.0
+ARG PACK_VERSION=0.27.0
 WORKDIR /app
 RUN npm --no-update-notifier --no-fund --global install pnpm@${PNPM_VERSION}
@@ -28,4 +28,4 @@ RUN curl -SL https://cdn.coollabs.io/bin/$TARGETPLATFORM/pack-$PACK_VERSION -o /
 RUN chmod +x ~/.docker/cli-plugins/docker-compose /usr/bin/docker /usr/local/bin/pack
 EXPOSE 3000
 ENV CHECKPOINT_DISABLE=1

View File

@@ -100,7 +100,7 @@ Deploy your resource to:
 - Mastodon: [@andrasbacsai@fosstodon.org](https://fosstodon.org/@andrasbacsai)
 - Telegram: [@andrasbacsai](https://t.me/andrasbacsai)
-- Twitter: [@andrasbacsai](https://twitter.com/andrasbacsai)
+- Twitter: [@andrasbacsai](https://twitter.com/heyandras)
 - Email: [andras@coollabs.io](mailto:andras@coollabs.io)
 - Discord: [Invitation](https://coollabs.io/discord)
@@ -153,3 +153,6 @@ Support this project with your organization. Your logo will show up here with a
 <a href="https://opencollective.com/coollabsio"><img src="https://opencollective.com/coollabsio/individuals.svg?width=890"></a>
+
+## Star History
+[![Star History Chart](https://api.star-history.com/svg?repos=coollabsio/coolify&type=Date)](https://star-history.com/#coollabsio/coolify&Date)

View File

@@ -230,7 +230,7 @@
       description: "Open Source realtime backend in 1 file"
       services:
         $$id:
-          image: coollabsio/pocketbase:$$core_version
+          image: ghcr.io/coollabsio/pocketbase:$$core_version
           volumes:
             - $$id-data:/app/pb_data
           ports:
@@ -414,6 +414,7 @@
       proxy:
         - port: "22"
           hostPort: $$config_hostport_ssh
+        - port: "3000"
       variables:
         - id: $$config_hostport_ssh
           name: SSH_PORT

View File

@@ -0,0 +1,2 @@
-- AlterTable
ALTER TABLE "ApplicationPersistentStorage" ADD COLUMN "hostPath" TEXT;

View File

@@ -195,6 +195,7 @@ model ApplicationSettings {
 model ApplicationPersistentStorage {
   id            String   @id @default(cuid())
   applicationId String
+  hostPath      String?
   path          String
   oldPath       Boolean  @default(false)
   createdAt     DateTime @default(now())

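The migration and schema change above add an optional hostPath to ApplicationPersistentStorage, so a storage entry can point at a directory on the host instead of an auto-generated named volume. A minimal sketch of writing such an entry with the generated Prisma client follows; the record id and the paths are hypothetical, only the model and field names come from the schema above.

import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function attachHostPath() {
  // Bind /mnt/backups on the host to /app/data inside the container.
  await prisma.applicationPersistentStorage.update({
    where: { id: 'storage-id-goes-here' },
    data: { hostPath: '/mnt/backups', path: '/app/data' }
  });
}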
View File

@@ -402,14 +402,14 @@ async function autoUpdater() {
   if (!isDev) {
     const { isAutoUpdateEnabled } = await prisma.setting.findFirst();
     if (isAutoUpdateEnabled) {
-      await executeCommand({ command: `docker pull coollabsio/coolify:${latestVersion}` });
+      await executeCommand({ command: `docker pull ghcr.io/coollabsio/coolify:${latestVersion}` });
       await executeCommand({ shell: true, command: `env | grep '^COOLIFY' > .env` });
       await executeCommand({
         command: `sed -i '/COOLIFY_AUTO_UPDATE=/cCOOLIFY_AUTO_UPDATE=${isAutoUpdateEnabled}' .env`
       });
       await executeCommand({
         shell: true,
-        command: `docker run --rm -tid --env-file .env -v /var/run/docker.sock:/var/run/docker.sock -v coolify-db coollabsio/coolify:${latestVersion} /bin/sh -c "env | grep COOLIFY > .env && echo 'TAG=${latestVersion}' >> .env && docker stop -t 0 coolify coolify-fluentbit && docker rm coolify coolify-fluentbit && docker compose pull && docker compose up -d --force-recreate"`
+        command: `docker run --rm -tid --env-file .env -v /var/run/docker.sock:/var/run/docker.sock -v coolify-db ghcr.io/coollabsio/coolify:${latestVersion} /bin/sh -c "env | grep COOLIFY > .env && echo 'TAG=${latestVersion}' >> .env && docker stop -t 0 coolify coolify-fluentbit && docker rm coolify coolify-fluentbit && docker compose pull && docker compose up -d --force-recreate"`
       });
     }
   } else {

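With the registry move, the auto-updater above pulls release images from ghcr.io instead of Docker Hub. A simplified sketch of that pull step, calling execa directly rather than the repository's executeCommand wrapper (the version string is only an example):

import { execaCommand } from 'execa';

async function pullCoolifyImage(latestVersion: string) {
  // e.g. latestVersion = '3.12.29'
  await execaCommand(`docker pull ghcr.io/coollabsio/coolify:${latestVersion}`);
}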
View File

@@ -110,6 +110,9 @@ import * as buildpacks from '../lib/buildPacks';
           .replace(/\//gi, '-')
           .replace('-app', '')}:${storage.path}`;
       }
+      if (storage.hostPath) {
+        return `${storage.hostPath}:${storage.path}`
+      }
       return `${applicationId}${storage.path.replace(/\//gi, '-')}:${storage.path}`;
     }) || [];
@@ -160,7 +163,11 @@ import * as buildpacks from '../lib/buildPacks';
       port: exposePort ? `${exposePort}:${port}` : port
     });
     try {
-      const composeVolumes = volumes.map((volume) => {
+      const composeVolumes = volumes.filter(v => {
+        if (!v.startsWith('.') && !v.startsWith('..') && !v.startsWith('/') && !v.startsWith('~')) {
+          return v;
+        }
+      }).map((volume) => {
         return {
           [`${volume.split(':')[0]}`]: {
             name: volume.split(':')[0]
@@ -381,12 +388,15 @@ import * as buildpacks from '../lib/buildPacks';
           .replace(/\//gi, '-')
           .replace('-app', '')}:${storage.path}`;
       }
+      if (storage.hostPath) {
+        return `${storage.hostPath}:${storage.path}`
+      }
       return `${applicationId}${storage.path.replace(/\//gi, '-')}:${storage.path}`;
     }) || [];
     try {
       dockerComposeConfiguration = JSON.parse(dockerComposeConfiguration);
-    } catch (error) {}
+    } catch (error) { }
     let deployNeeded = true;
     let destinationType;
@@ -406,7 +416,7 @@ import * as buildpacks from '../lib/buildPacks';
     installCommand = configuration.installCommand;
     startCommand = configuration.startCommand;
     buildCommand = configuration.buildCommand;
-    publishDirectory = configuration.publishDirectory;
+    publishDirectory = configuration.publishDirectory || '';
     baseDirectory = configuration.baseDirectory || '';
     dockerFileLocation = configuration.dockerFileLocation;
     dockerComposeFileLocation = configuration.dockerComposeFileLocation;
@@ -453,7 +463,7 @@ import * as buildpacks from '../lib/buildPacks';
     try {
       await prisma.build.update({ where: { id: buildId }, data: { commit } });
-    } catch (err) {}
+    } catch (err) { }
     if (!pullmergeRequestId) {
       if (configHash !== currentHash) {
@@ -494,9 +504,8 @@ import * as buildpacks from '../lib/buildPacks';
     try {
       await executeCommand({
         dockerId: destinationDocker.id,
-        command: `docker ${
-          location ? `--config ${location}` : ''
-        } pull ${imageName}:${customTag}`
+        command: `docker ${location ? `--config ${location}` : ''
+          } pull ${imageName}:${customTag}`
       });
       imageFoundRemotely = true;
     } catch (error) {
@@ -659,9 +668,8 @@ import * as buildpacks from '../lib/buildPacks';
     try {
       const { stdout: containers } = await executeCommand({
         dockerId: destinationDockerId,
-        command: `docker ps -a --filter 'label=com.docker.compose.service=${
-          pullmergeRequestId ? imageId : applicationId
-        }' --format {{.ID}}`
+        command: `docker ps -a --filter 'label=com.docker.compose.service=${pullmergeRequestId ? imageId : applicationId
+          }' --format {{.ID}}`
       });
       if (containers) {
         const containerArray = containers.split('\n');
@@ -693,7 +701,11 @@ import * as buildpacks from '../lib/buildPacks';
       await saveDockerRegistryCredentials({ url, username, password, workdir });
     }
     try {
-      const composeVolumes = volumes.map((volume) => {
+      const composeVolumes = volumes.filter(v => {
+        if (!v.startsWith('.') && !v.startsWith('..') && !v.startsWith('/') && !v.startsWith('~')) {
+          return v;
+        }
+      }).map((volume) => {
        return {
          [`${volume.split(':')[0]}`]: {
            name: volume.split(':')[0]

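The handler above changes volume handling in two ways: a storage entry with hostPath becomes a plain host bind (hostPath:path), and sources that look like host paths (starting with ., .., / or ~) are skipped when named volumes are declared for the generated compose file. A condensed sketch of those two steps, assuming a simplified Storage shape:

type Storage = { path: string; hostPath?: string | null };

function toVolumeStrings(applicationId: string, storages: Storage[]): string[] {
  return storages.map((storage) => {
    if (storage.hostPath) {
      // Host bind mount: keep the host path as the source.
      return `${storage.hostPath}:${storage.path}`;
    }
    // Named volume derived from the application id and the container path.
    return `${applicationId}${storage.path.replace(/\//gi, '-')}:${storage.path}`;
  });
}

function declarableNamedVolumes(volumes: string[]): string[] {
  // Host binds must not be declared under the compose `volumes:` key.
  return volumes.filter(
    (v) => !v.startsWith('.') && !v.startsWith('..') && !v.startsWith('/') && !v.startsWith('~')
  );
}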
View File

@@ -429,7 +429,12 @@ export const setDefaultConfiguration = async (data: any) => {
     startCommand = template?.startCommand || 'yarn start';
   if (!buildCommand && buildPack !== 'static' && buildPack !== 'laravel')
     buildCommand = template?.buildCommand || null;
-  if (!publishDirectory) publishDirectory = template?.publishDirectory || null;
+  if (!publishDirectory) {
+    publishDirectory = template?.publishDirectory || null;
+  } else {
+    if (!publishDirectory.startsWith('/')) publishDirectory = `/${publishDirectory}`;
+    if (publishDirectory.endsWith('/')) publishDirectory = publishDirectory.slice(0, -1);
+  }
   if (baseDirectory) {
     if (!baseDirectory.startsWith('/')) baseDirectory = `/${baseDirectory}`;
     if (baseDirectory.endsWith('/') && baseDirectory !== '/')
@@ -702,9 +707,8 @@ export async function buildImage({
     buildId,
     applicationId,
     dockerId,
-    command: `docker ${location ? `--config ${location}` : ''} build ${
-      forceRebuild ? '--no-cache' : ''
-    } --progress plain -f ${workdir}/${dockerFile} -t ${cache} --build-arg SOURCE_COMMIT=${commit} ${workdir}`
+    command: `docker ${location ? `--config ${location}` : ''} build ${forceRebuild ? '--no-cache' : ''
+      } --progress plain -f ${workdir}/${dockerFile} -t ${cache} --build-arg SOURCE_COMMIT=${commit} ${workdir}`
   });
   const { status } = await prisma.build.findUnique({ where: { id: buildId } });

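The first hunk above normalizes a user-supplied publishDirectory so that it always starts with a slash and never ends with one, which lines up with the Dockerfile generators further down switching from /app/${publishDirectory} to /app${publishDirectory}. A small sketch of that normalization with example values (the standalone function name is illustrative, not from the codebase):

function normalizePublishDirectory(publishDirectory: string): string {
  if (!publishDirectory.startsWith('/')) publishDirectory = `/${publishDirectory}`;
  if (publishDirectory.endsWith('/')) publishDirectory = publishDirectory.slice(0, -1);
  return publishDirectory;
}

// normalizePublishDirectory('dist/')   -> '/dist'
// normalizePublishDirectory('build')   -> '/build'
// normalizePublishDirectory('/public') -> '/public'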
View File

@@ -36,12 +36,13 @@ export default async function (data) {
   if (volumes.length > 0) {
     for (const volume of volumes) {
       let [v, path] = volume.split(':');
-      composeVolumes[v] = {
-        name: v
-      };
+      if (!v.startsWith('.') && !v.startsWith('..') && !v.startsWith('/') && !v.startsWith('~')) {
+        composeVolumes[v] = {
+          name: v
+        };
+      }
     }
   }
   let networks = {};
   for (let [key, value] of Object.entries(dockerComposeYaml.services)) {
     value['container_name'] = `${applicationId}-${key}`;
@@ -77,17 +78,54 @@ export default async function (data) {
     // TODO: If we support separated volume for each service, we need to add it here
     if (value['volumes']?.length > 0) {
       value['volumes'] = value['volumes'].map((volume) => {
-        let [v, path, permission] = volume.split(':');
-        if (!path) {
-          path = v;
-          v = `${applicationId}${v.replace(/\//gi, '-').replace(/\./gi, '')}`;
-        } else {
-          v = `${applicationId}${v.replace(/\//gi, '-').replace(/\./gi, '')}`;
-        }
-        composeVolumes[v] = {
-          name: v
-        };
-        return `${v}:${path}${permission ? ':' + permission : ''}`;
+        if (typeof volume === 'string') {
+          let [v, path, permission] = volume.split(':');
+          if (
+            v.startsWith('.') ||
+            v.startsWith('..') ||
+            v.startsWith('/') ||
+            v.startsWith('~') ||
+            v.startsWith('$PWD')
+          ) {
+            v = v.replace(/^\./, `~`).replace(/^\.\./, '~').replace(/^\$PWD/, '~');
+          } else {
+            if (!path) {
+              path = v;
+              v = `${applicationId}${v.replace(/\//gi, '-').replace(/\./gi, '')}`;
+            } else {
+              v = `${applicationId}${v.replace(/\//gi, '-').replace(/\./gi, '')}`;
+            }
+            composeVolumes[v] = {
+              name: v
+            };
+          }
+          return `${v}:${path}${permission ? ':' + permission : ''}`;
+        }
+        if (typeof volume === 'object') {
+          let { source, target, mode } = volume;
+          if (
+            source.startsWith('.') ||
+            source.startsWith('..') ||
+            source.startsWith('/') ||
+            source.startsWith('~') ||
+            source.startsWith('$PWD')
+          ) {
+            source = source.replace(/^\./, `~`).replace(/^\.\./, '~').replace(/^\$PWD/, '~');
+            console.log({source})
+          } else {
+            if (!target) {
+              target = source;
+              source = `${applicationId}${source.replace(/\//gi, '-').replace(/\./gi, '')}`;
+            } else {
+              source = `${applicationId}${source.replace(/\//gi, '-').replace(/\./gi, '')}`;
+            }
+          }
+          return `${source}:${target}${mode ? ':' + mode : ''}`;
+        }
       });
     }
     if (volumes.length > 0) {
@@ -95,7 +133,7 @@ export default async function (data) {
         value['volumes'].push(volume);
       }
     }
-    if (dockerComposeConfiguration[key].port) {
+    if (dockerComposeConfiguration[key]?.port) {
       value['expose'] = [dockerComposeConfiguration[key].port];
     }
     if (value['networks']?.length > 0) {

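The compose handler above now accepts both the short string form ("source:target:mode") and the long object form ({ source, target, mode }) of a service volume, and rewrites relative, absolute, and $PWD-based sources to ~ so they are treated as host paths rather than auto-named volumes. A condensed sketch of that normalization under simplified assumptions (namedVolumes stands in for the composeVolumes map, and the replacement order of the prefixes is simplified):

type LongVolume = { source: string; target?: string; mode?: string };

function isHostPath(source: string): boolean {
  return ['.', '..', '/', '~', '$PWD'].some((prefix) => source.startsWith(prefix));
}

function normalizeVolume(
  applicationId: string,
  volume: string | LongVolume,
  namedVolumes: Record<string, { name: string }>
): string {
  let source: string;
  let target: string | undefined;
  let mode: string | undefined;
  if (typeof volume === 'string') {
    [source, target, mode] = volume.split(':');
  } else {
    ({ source, target, mode } = volume);
  }
  if (isHostPath(source)) {
    // Rewrite ., .. and $PWD-style sources to the home-directory marker.
    source = source.replace(/^\$PWD/, '~').replace(/^\.\.?/, '~');
  } else {
    if (!target) target = source;
    source = `${applicationId}${source.replace(/\//gi, '-').replace(/\./gi, '')}`;
    // Only generated named volumes are declared in the compose volumes map.
    namedVolumes[source] = { name: source };
  }
  return `${source}:${target}${mode ? ':' + mode : ''}`;
}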
View File

@@ -8,7 +8,8 @@ const createDockerfile = async (data, imageforBuild): Promise<void> => {
   Dockerfile.push(`FROM ${imageforBuild}`);
   Dockerfile.push('WORKDIR /app');
   Dockerfile.push(`LABEL coolify.buildId=${buildId}`);
-  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app/${publishDirectory} ./`);
+  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app${publishDirectory} ./`);
+  Dockerfile.push('RUN rm -fr .git');
   if (baseImage?.includes('nginx')) {
     Dockerfile.push(`COPY /nginx.conf /etc/nginx/nginx.conf`);
   }

View File

@@ -30,6 +30,7 @@ const createDockerfile = async (data, image): Promise<void> => {
     `COPY --chown=application:application --from=${applicationId}:${tag}-cache /app/mix-manifest.json /app/public/mix-manifest.json`
   );
   Dockerfile.push(`COPY --chown=application:application . ./`);
+  Dockerfile.push('RUN rm -fr .git');
   Dockerfile.push(`EXPOSE ${port}`);
   await fs.writeFile(`${workdir}/Dockerfile`, Dockerfile.join('\n'));
 };

View File

@@ -2,7 +2,7 @@ import { promises as fs } from 'fs';
 import { buildCacheImageWithNode, buildImage } from './common';
 const createDockerfile = async (data, image): Promise<void> => {
-  const { buildId, applicationId, tag, port, startCommand, workdir, baseDirectory } = data;
+  const { buildId, applicationId, tag, port, startCommand, workdir, publishDirectory } = data;
   const Dockerfile: Array<string> = [];
   const isPnpm = startCommand.includes('pnpm');
@@ -12,8 +12,8 @@ const createDockerfile = async (data, image): Promise<void> => {
   if (isPnpm) {
     Dockerfile.push('RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm@7');
   }
-  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app/${baseDirectory || ''} ./`);
+  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app${publishDirectory} ./`);
+  Dockerfile.push('RUN rm -fr .git');
   Dockerfile.push(`EXPOSE ${port}`);
   Dockerfile.push(`CMD ${startCommand}`);
   await fs.writeFile(`${workdir}/Dockerfile`, Dockerfile.join('\n'));

View File

@@ -42,7 +42,8 @@ const createDockerfile = async (data, image): Promise<void> => {
   if (baseImage?.includes('nginx')) {
     Dockerfile.push(`COPY /nginx.conf /etc/nginx/nginx.conf`);
   }
-  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app/${publishDirectory} ./`);
+  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app${publishDirectory} ./`);
+  Dockerfile.push('RUN rm -fr .git');
   Dockerfile.push(`EXPOSE 80`);
 }

View File

@@ -29,6 +29,7 @@ const createDockerfile = async (data, image): Promise<void> => {
     Dockerfile.push('RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm@7');
   }
   Dockerfile.push(`COPY .${baseDirectory || ''} ./`);
+  Dockerfile.push('RUN rm -fr .git');
   Dockerfile.push(`RUN ${installCommand}`);
   if (buildCommand) {
     Dockerfile.push(`RUN ${buildCommand}`);

View File

@@ -42,7 +42,8 @@ const createDockerfile = async (data, image): Promise<void> => {
   if (baseImage?.includes('nginx')) {
     Dockerfile.push(`COPY /nginx.conf /etc/nginx/nginx.conf`);
   }
-  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app/${publishDirectory} ./`);
+  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app${publishDirectory} ./`);
+  Dockerfile.push('RUN rm -fr .git');
   Dockerfile.push(`EXPOSE 80`);
 }

View File

@@ -8,7 +8,8 @@ const createDockerfile = async (data, image): Promise<void> => {
   Dockerfile.push(`FROM ${image}`);
   Dockerfile.push(`LABEL coolify.buildId=${buildId}`);
   Dockerfile.push('WORKDIR /app');
-  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app/${publishDirectory} ./`);
+  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app${publishDirectory} ./`);
+  Dockerfile.push('RUN rm -fr .git');
   if (baseImage?.includes('nginx')) {
     Dockerfile.push(`COPY /nginx.conf /etc/nginx/nginx.conf`);
   }

View File

@@ -20,6 +20,7 @@ const createDockerfile = async (data, image, name): Promise<void> => {
   );
   Dockerfile.push(`RUN update-ca-certificates`);
   Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app/target/release/${name} ${name}`);
+  Dockerfile.push('RUN rm -fr .git');
   Dockerfile.push(`EXPOSE ${port}`);
   Dockerfile.push(`CMD ["/app/${name}"]`);
   await fs.writeFile(`${workdir}/Dockerfile`, Dockerfile.join('\n'));

View File

@@ -31,7 +31,8 @@ const createDockerfile = async (data, image): Promise<void> => {
   });
   }
   if (buildCommand) {
-    Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app/${publishDirectory} ./`);
+    Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app${publishDirectory} ./`);
+    Dockerfile.push('RUN rm -fr .git');
   } else {
     Dockerfile.push(`COPY .${baseDirectory || ''} ./`);
   }

View File

@@ -8,7 +8,8 @@ const createDockerfile = async (data, image): Promise<void> => {
   Dockerfile.push(`FROM ${image}`);
   Dockerfile.push('WORKDIR /app');
   Dockerfile.push(`LABEL coolify.buildId=${buildId}`);
-  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app/${publishDirectory} ./`);
+  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app${publishDirectory} ./`);
+  Dockerfile.push('RUN rm -fr .git');
   if (baseImage?.includes('nginx')) {
     Dockerfile.push(`COPY /nginx.conf /etc/nginx/nginx.conf`);
   }

View File

@@ -8,7 +8,8 @@ const createDockerfile = async (data, image): Promise<void> => {
   Dockerfile.push(`FROM ${image}`);
   Dockerfile.push('WORKDIR /app');
   Dockerfile.push(`LABEL coolify.buildId=${buildId}`);
-  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app/${publishDirectory} ./`);
+  Dockerfile.push(`COPY --from=${applicationId}:${tag}-cache /app${publishDirectory} ./`);
+  Dockerfile.push('RUN rm -fr .git');
   if (baseImage?.includes('nginx')) {
     Dockerfile.push(`COPY /nginx.conf /etc/nginx/nginx.conf`);
   }

View File

@@ -11,15 +11,15 @@ import { promises as dns } from 'dns';
import * as Sentry from '@sentry/node'; import * as Sentry from '@sentry/node';
import { PrismaClient } from '@prisma/client'; import { PrismaClient } from '@prisma/client';
import os from 'os'; import os from 'os';
import sshConfig from 'ssh-config'; import * as SSHConfig from 'ssh-config/src/ssh-config';
import jsonwebtoken from 'jsonwebtoken'; import jsonwebtoken from 'jsonwebtoken';
import { checkContainer, removeContainer } from './docker'; import { checkContainer, removeContainer } from './docker';
import { day } from './dayjs'; import { day } from './dayjs';
import { saveBuildLog, saveDockerRegistryCredentials } from './buildPacks/common'; import { saveBuildLog } from './buildPacks/common';
import { scheduler } from './scheduler'; import { scheduler } from './scheduler';
import type { ExecaChildProcess } from 'execa'; import type { ExecaChildProcess } from 'execa';
export const version = '3.12.20'; export const version = '3.12.29';
export const isDev = process.env.NODE_ENV === 'development'; export const isDev = process.env.NODE_ENV === 'development';
export const proxyPort = process.env.COOLIFY_PROXY_PORT; export const proxyPort = process.env.COOLIFY_PROXY_PORT;
export const proxySecurePort = process.env.COOLIFY_PROXY_SECURE_PORT; export const proxySecurePort = process.env.COOLIFY_PROXY_SECURE_PORT;
@@ -402,8 +402,8 @@ export const supportedDatabaseTypesAndVersions = [
fancyName: 'MongoDB', fancyName: 'MongoDB',
baseImage: 'bitnami/mongodb', baseImage: 'bitnami/mongodb',
baseImageARM: 'mongo', baseImageARM: 'mongo',
versions: ['5.0', '4.4', '4.2'], versions: ['6.0', '5.0', '4.4', '4.2'],
versionsARM: ['5.0', '4.4', '4.2'] versionsARM: ['6.0', '5.0', '4.4', '4.2']
}, },
{ {
name: 'mysql', name: 'mysql',
@@ -418,16 +418,16 @@ export const supportedDatabaseTypesAndVersions = [
fancyName: 'MariaDB', fancyName: 'MariaDB',
baseImage: 'bitnami/mariadb', baseImage: 'bitnami/mariadb',
baseImageARM: 'mariadb', baseImageARM: 'mariadb',
versions: ['10.8', '10.7', '10.6', '10.5', '10.4', '10.3', '10.2'], versions: ['10.11', '10.10', '10.9', '10.8', '10.7', '10.6', '10.5', '10.4', '10.3', '10.2'],
versionsARM: ['10.8', '10.7', '10.6', '10.5', '10.4', '10.3', '10.2'] versionsARM: ['10.11', '10.10', '10.9', '10.8', '10.7', '10.6', '10.5', '10.4', '10.3', '10.2']
}, },
{ {
name: 'postgresql', name: 'postgresql',
fancyName: 'PostgreSQL', fancyName: 'PostgreSQL',
baseImage: 'bitnami/postgresql', baseImage: 'bitnami/postgresql',
baseImageARM: 'postgres', baseImageARM: 'postgres',
versions: ['14.5.0', '13.8.0', '12.12.0', '11.17.0', '10.22.0'], versions: ['15.2.0', '14.7.0', '14.5.0', '13.8.0', '12.12.0', '11.17.0', '10.22.0'],
versionsARM: ['14.5', '13.8', '12.12', '11.17', '10.22'] versionsARM: ['15.2', '14.7', '14.5', '13.8', '12.12', '11.17', '10.22']
}, },
{ {
name: 'redis', name: 'redis',
@@ -442,14 +442,14 @@ export const supportedDatabaseTypesAndVersions = [
fancyName: 'CouchDB', fancyName: 'CouchDB',
baseImage: 'bitnami/couchdb', baseImage: 'bitnami/couchdb',
baseImageARM: 'couchdb', baseImageARM: 'couchdb',
versions: ['3.2.2', '3.1.2', '2.3.1'], versions: ['3.3.1', '3.2.2', '3.1.2', '2.3.1'],
versionsARM: ['3.2.2', '3.1.2', '2.3.1'] versionsARM: ['3.3', '3.2.2', '3.1.2', '2.3.1']
}, },
{ {
name: 'edgedb', name: 'edgedb',
fancyName: 'EdgeDB', fancyName: 'EdgeDB',
baseImage: 'edgedb/edgedb', baseImage: 'edgedb/edgedb',
versions: ['latest', '2.1', '2.0', '1.4'] versions: ['latest', '2.9', '2.8', '2.7']
} }
]; ];
@@ -498,33 +498,56 @@ export async function getFreeSSHLocalPort(id: string): Promise<number | boolean>
return false; return false;
} }
/**
* Update the ssh config file with a host
*
* @param id Destination ID
* @returns
*/
export async function createRemoteEngineConfiguration(id: string) { export async function createRemoteEngineConfiguration(id: string) {
const homedir = os.homedir();
const sshKeyFile = `/tmp/id_rsa-${id}`; const sshKeyFile = `/tmp/id_rsa-${id}`;
const localPort = await getFreeSSHLocalPort(id); const localPort = await getFreeSSHLocalPort(id);
const { const {
sshKey: { privateKey }, sshKey: { privateKey },
network,
remoteIpAddress, remoteIpAddress,
remotePort, remotePort,
remoteUser remoteUser
} = await prisma.destinationDocker.findFirst({ where: { id }, include: { sshKey: true } }); } = await prisma.destinationDocker.findFirst({ where: { id }, include: { sshKey: true } });
// Write new keyfile
await fs.writeFile(sshKeyFile, decrypt(privateKey) + '\n', { encoding: 'utf8', mode: 400 }); await fs.writeFile(sshKeyFile, decrypt(privateKey) + '\n', { encoding: 'utf8', mode: 400 });
const config = sshConfig.parse('');
const Host = `${remoteIpAddress}-remote`; const Host = `${remoteIpAddress}-remote`;
// Removes previous ssh-keys
try { try {
await executeCommand({ command: `ssh-keygen -R ${Host}` }); await executeCommand({ command: `ssh-keygen -R ${Host}` });
await executeCommand({ command: `ssh-keygen -R ${remoteIpAddress}` }); await executeCommand({ command: `ssh-keygen -R ${remoteIpAddress}` });
await executeCommand({ command: `ssh-keygen -R localhost:${localPort}` }); await executeCommand({ command: `ssh-keygen -R localhost:${localPort}` });
} catch (error) { } } catch (error) {
//
}
const homedir = os.homedir();
let currentConfigFileContent = '';
try {
// Read the current config file
currentConfigFileContent = (await fs.readFile(`${homedir}/.ssh/config`)).toString();
} catch (error) {
// File doesn't exist, so we do nothing, a new one is going to be created
}
// Parse the config file
const config = SSHConfig.parse(currentConfigFileContent);
// Remove current config for the given host
const found = config.find({ Host }); const found = config.find({ Host });
const foundIp = config.find({ Host: remoteIpAddress }); const foundIp = config.find({ Host: remoteIpAddress });
if (found) config.remove({ Host }); if (found) config.remove({ Host });
if (foundIp) config.remove({ Host: remoteIpAddress }); if (foundIp) config.remove({ Host: remoteIpAddress });
// Create the new config
config.append({ config.append({
Host, Host,
Hostname: remoteIpAddress, Hostname: remoteIpAddress,
@@ -537,13 +560,17 @@ export async function createRemoteEngineConfiguration(id: string) {
ControlPersist: '10m' ControlPersist: '10m'
}); });
// Check if .ssh folder exists, and if not create one
try { try {
await fs.stat(`${homedir}/.ssh/`); await fs.stat(`${homedir}/.ssh/`);
} catch (error) { } catch (error) {
await fs.mkdir(`${homedir}/.ssh/`); await fs.mkdir(`${homedir}/.ssh/`);
} }
return await fs.writeFile(`${homedir}/.ssh/config`, sshConfig.stringify(config));
// Write the config
return await fs.writeFile(`${homedir}/.ssh/config`, SSHConfig.stringify(config));
} }
export async function executeCommand({ export async function executeCommand({
command, command,
dockerId = null, dockerId = null,
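
For orientation, a minimal sketch of the read–parse–dedupe–append–write cycle the updated createRemoteEngineConfiguration follows, using the ssh-config package's parse/find/remove/append/stringify API and Node's fs/promises; the helper name, host alias and option values here are illustrative, not taken from the diff:

import os from 'node:os';
import fs from 'node:fs/promises';
import SSHConfig from 'ssh-config';

// Illustrative only: upsert one Host block in ~/.ssh/config without clobbering other entries.
async function upsertSshHost(hostAlias: string, hostname: string, port: number, user: string, identityFile: string) {
	const configPath = `${os.homedir()}/.ssh/config`;
	let raw = '';
	try {
		raw = (await fs.readFile(configPath)).toString();
	} catch {
		// No config file yet; start from an empty one.
	}
	const config = SSHConfig.parse(raw);
	// Drop any stale section for the same alias before re-adding it.
	if (config.find({ Host: hostAlias })) config.remove({ Host: hostAlias });
	config.append({
		Host: hostAlias,
		Hostname: hostname,
		Port: String(port),
		User: user,
		IdentityFile: identityFile,
		StrictHostKeyChecking: 'no'
	});
	await fs.mkdir(`${os.homedir()}/.ssh`, { recursive: true });
	await fs.writeFile(configPath, SSHConfig.stringify(config));
}

The diff additionally clears stale known_hosts entries with ssh-keygen -R before rewriting the config, so a reused local forwarding port does not trip host-key verification.
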
@@ -822,97 +849,97 @@ export function generatePassword({
type DatabaseConfiguration = type DatabaseConfiguration =
| { | {
volume: string; volume: string;
image: string; image: string;
command?: string; command?: string;
ulimits: Record<string, unknown>; ulimits: Record<string, unknown>;
privatePort: number; privatePort: number;
environmentVariables: { environmentVariables: {
MYSQL_DATABASE: string; MYSQL_DATABASE: string;
MYSQL_PASSWORD: string; MYSQL_PASSWORD: string;
MYSQL_ROOT_USER: string; MYSQL_ROOT_USER: string;
MYSQL_USER: string; MYSQL_USER: string;
MYSQL_ROOT_PASSWORD: string; MYSQL_ROOT_PASSWORD: string;
}; };
} }
| { | {
volume: string; volume: string;
image: string; image: string;
command?: string; command?: string;
ulimits: Record<string, unknown>; ulimits: Record<string, unknown>;
privatePort: number; privatePort: number;
environmentVariables: { environmentVariables: {
MONGO_INITDB_ROOT_USERNAME?: string; MONGO_INITDB_ROOT_USERNAME?: string;
MONGO_INITDB_ROOT_PASSWORD?: string; MONGO_INITDB_ROOT_PASSWORD?: string;
MONGODB_ROOT_USER?: string; MONGODB_ROOT_USER?: string;
MONGODB_ROOT_PASSWORD?: string; MONGODB_ROOT_PASSWORD?: string;
}; };
} }
| { | {
volume: string; volume: string;
image: string; image: string;
command?: string; command?: string;
ulimits: Record<string, unknown>; ulimits: Record<string, unknown>;
privatePort: number; privatePort: number;
environmentVariables: { environmentVariables: {
MARIADB_ROOT_USER: string; MARIADB_ROOT_USER: string;
MARIADB_ROOT_PASSWORD: string; MARIADB_ROOT_PASSWORD: string;
MARIADB_USER: string; MARIADB_USER: string;
MARIADB_PASSWORD: string; MARIADB_PASSWORD: string;
MARIADB_DATABASE: string; MARIADB_DATABASE: string;
}; };
} }
| { | {
volume: string; volume: string;
image: string; image: string;
command?: string; command?: string;
ulimits: Record<string, unknown>; ulimits: Record<string, unknown>;
privatePort: number; privatePort: number;
environmentVariables: { environmentVariables: {
POSTGRES_PASSWORD?: string; POSTGRES_PASSWORD?: string;
POSTGRES_USER?: string; POSTGRES_USER?: string;
POSTGRES_DB?: string; POSTGRES_DB?: string;
POSTGRESQL_POSTGRES_PASSWORD?: string; POSTGRESQL_POSTGRES_PASSWORD?: string;
POSTGRESQL_USERNAME?: string; POSTGRESQL_USERNAME?: string;
POSTGRESQL_PASSWORD?: string; POSTGRESQL_PASSWORD?: string;
POSTGRESQL_DATABASE?: string; POSTGRESQL_DATABASE?: string;
}; };
} }
| { | {
volume: string; volume: string;
image: string; image: string;
command?: string; command?: string;
ulimits: Record<string, unknown>; ulimits: Record<string, unknown>;
privatePort: number; privatePort: number;
environmentVariables: { environmentVariables: {
REDIS_AOF_ENABLED: string; REDIS_AOF_ENABLED: string;
REDIS_PASSWORD: string; REDIS_PASSWORD: string;
}; };
} }
| { | {
volume: string; volume: string;
image: string; image: string;
command?: string; command?: string;
ulimits: Record<string, unknown>; ulimits: Record<string, unknown>;
privatePort: number; privatePort: number;
environmentVariables: { environmentVariables: {
COUCHDB_PASSWORD: string; COUCHDB_PASSWORD: string;
COUCHDB_USER: string; COUCHDB_USER: string;
}; };
} }
| { | {
volume: string; volume: string;
image: string; image: string;
command?: string; command?: string;
ulimits: Record<string, unknown>; ulimits: Record<string, unknown>;
privatePort: number; privatePort: number;
environmentVariables: { environmentVariables: {
EDGEDB_SERVER_PASSWORD: string; EDGEDB_SERVER_PASSWORD: string;
EDGEDB_SERVER_USER: string; EDGEDB_SERVER_USER: string;
EDGEDB_SERVER_DATABASE: string; EDGEDB_SERVER_DATABASE: string;
EDGEDB_SERVER_TLS_CERT_MODE: string; EDGEDB_SERVER_TLS_CERT_MODE: string;
}; };
}; };
export function generateDatabaseConfiguration(database: any): DatabaseConfiguration { export function generateDatabaseConfiguration(database: any): DatabaseConfiguration {
const { id, dbUser, dbUserPassword, rootUser, rootUserPassword, defaultDatabase, version, type } = const { id, dbUser, dbUserPassword, rootUser, rootUserPassword, defaultDatabase, version, type } =
database; database;
@@ -986,7 +1013,7 @@ export function generateDatabaseConfiguration(database: any): DatabaseConfigurat
ulimits: {} ulimits: {}
}; };
if (isARM()) { if (isARM()) {
configuration.volume = `${id}-${type}-data:/var/lib/postgresql`; configuration.volume = `${id}-${type}-data:/var/lib/postgresql/data`;
configuration.environmentVariables = { configuration.environmentVariables = {
POSTGRES_PASSWORD: dbUserPassword, POSTGRES_PASSWORD: dbUserPassword,
POSTGRES_USER: dbUser, POSTGRES_USER: dbUser,
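
The ARM branch runs the official postgres image, whose data directory (PGDATA) defaults to /var/lib/postgresql/data, so the named volume now targets that path instead of its parent; mounting only the parent directory leaves the actual data outside the volume. A sketch of the corrected ARM fragment (names mirror the hunk, the shape is illustrative):

type ArmPostgresConfig = { volume: string; environmentVariables: Record<string, string> };

// Sketch only: the official postgres image keeps its data under PGDATA,
// which defaults to /var/lib/postgresql/data, so the volume must target that path.
function armPostgresConfiguration(id: string, type: string, dbUser: string, dbUserPassword: string): ArmPostgresConfig {
	return {
		volume: `${id}-${type}-data:/var/lib/postgresql/data`,
		environmentVariables: {
			POSTGRES_PASSWORD: dbUserPassword,
			POSTGRES_USER: dbUser
			// POSTGRES_DB and the remaining variables follow in the full configuration.
		}
	};
}
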
@@ -1011,8 +1038,9 @@ export function generateDatabaseConfiguration(database: any): DatabaseConfigurat
}; };
if (isARM()) { if (isARM()) {
configuration.volume = `${id}-${type}-data:/data`; configuration.volume = `${id}-${type}-data:/data`;
configuration.command = `/usr/local/bin/redis-server --appendonly ${appendOnly ? 'yes' : 'no'} --requirepass ${dbUserPassword}`; configuration.command = `/usr/local/bin/redis-server --appendonly ${
	appendOnly ? 'yes' : 'no'
} --requirepass ${dbUserPassword}`;
} }
return configuration; return configuration;
} else if (type === 'couchdb') { } else if (type === 'couchdb') {
@@ -1048,7 +1076,7 @@ export function generateDatabaseConfiguration(database: any): DatabaseConfigurat
} }
export function isARM() { export function isARM() {
const arch = process.arch; const arch = process.arch;
if (arch === 'arm' || arch === 'arm64') { if (arch === 'arm' || arch === 'arm64' || arch === 'aarch' || arch === 'aarch64') {
return true; return true;
} }
return false; return false;
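
Node's process.arch reports 'arm' or 'arm64' on ARM hosts; the extra 'aarch'/'aarch64' comparisons are a defensive catch-all rather than values Node is documented to return. A set-based sketch of the same check (the function name is illustrative):

// Sketch: treat any ARM spelling as ARM.
const ARM_ARCHITECTURES = new Set(['arm', 'arm64', 'aarch', 'aarch64']);

export function isArmArchitecture(arch: string = process.arch): boolean {
	return ARM_ARCHITECTURES.has(arch);
}
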
@@ -1097,12 +1125,12 @@ export type ComposeFileService = {
command?: string; command?: string;
ports?: string[]; ports?: string[];
build?: build?:
| { | {
context: string; context: string;
dockerfile: string; dockerfile: string;
args?: Record<string, unknown>; args?: Record<string, unknown>;
} }
| string; | string;
deploy?: { deploy?: {
restart_policy?: { restart_policy?: {
condition?: string; condition?: string;
@@ -1173,7 +1201,7 @@ export const createDirectories = async ({
let workdirFound = false; let workdirFound = false;
try { try {
workdirFound = !!(await fs.stat(workdir)); workdirFound = !!(await fs.stat(workdir));
} catch (error) { } } catch (error) {}
if (workdirFound) { if (workdirFound) {
await executeCommand({ command: `rm -fr ${workdir}` }); await executeCommand({ command: `rm -fr ${workdir}` });
} }
@@ -1633,6 +1661,9 @@ export function errorHandler({
type?: string | null; type?: string | null;
}) { }) {
if (message.message) message = message.message; if (message.message) message = message.message;
if (message.includes('Unique constraint failed')) {
message = 'This data is unique and already exists. Please try again with a different value.';
}
if (type === 'normal') { if (type === 'normal') {
Sentry.captureException(message); Sentry.captureException(message);
} }
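
The new branch rewrites Prisma's 'Unique constraint failed' message into something safe to show in the UI. A sketch of the same idea; matching on Prisma's documented P2002 error code is a slightly more robust alternative to string matching, assuming @prisma/client is available as elsewhere in this codebase:

import { Prisma } from '@prisma/client';

// Sketch only: translate low-level persistence errors into messages safe to show to users.
function toUserFacingMessage(error: unknown): string {
	if (error instanceof Prisma.PrismaClientKnownRequestError && error.code === 'P2002') {
		return 'This data is unique and already exists. Please try again with a different value.';
	}
	if (error instanceof Error && error.message.includes('Unique constraint failed')) {
		return 'This data is unique and already exists. Please try again with a different value.';
	}
	return error instanceof Error ? error.message : String(error);
}
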
@@ -1697,7 +1728,7 @@ export async function stopBuild(buildId, applicationId) {
} }
} }
count++; count++;
} catch (error) { } } catch (error) {}
}, 100); }, 100);
}); });
} }
@@ -1720,7 +1751,7 @@ export async function cleanupDockerStorage(dockerId) {
// Cleanup images that are not used by any container // Cleanup images that are not used by any container
try { try {
await executeCommand({ dockerId, command: `docker image prune -af` }); await executeCommand({ dockerId, command: `docker image prune -af` });
} catch (error) { } } catch (error) {}
// Prune coolify managed containers // Prune coolify managed containers
try { try {
@@ -1728,12 +1759,12 @@ export async function cleanupDockerStorage(dockerId) {
dockerId, dockerId,
command: `docker container prune -f --filter "label=coolify.managed=true"` command: `docker container prune -f --filter "label=coolify.managed=true"`
}); });
} catch (error) { } } catch (error) {}
// Cleanup build caches // Cleanup build caches
try { try {
await executeCommand({ dockerId, command: `docker builder prune -af` }); await executeCommand({ dockerId, command: `docker builder prune -af` });
} catch (error) { } } catch (error) {}
} }
export function persistentVolumes(id, persistentStorage, config) { export function persistentVolumes(id, persistentStorage, config) {


@@ -50,24 +50,12 @@ export async function startService(request: FastifyRequest<ServiceStartStop>, fa
const config = {}; const config = {};
for (const s in template.services) { for (const s in template.services) {
let newEnvironments = [] let newEnvironments = []
if (arm) { if (template.services[s]?.environment?.length > 0) {
if (template.services[s]?.environmentArm?.length > 0) { for (const environment of template.services[s].environment) {
for (const environment of template.services[s].environmentArm) { let [env, ...value] = environment.split("=");
let [env, ...value] = environment.split("="); value = value.join("=")
value = value.join("=") if (!value.startsWith('$$secret') && value !== '') {
if (!value.startsWith('$$secret') && value !== '') { newEnvironments.push(`${env}=${value}`)
newEnvironments.push(`${env}=${value}`)
}
}
}
} else {
if (template.services[s]?.environment?.length > 0) {
for (const environment of template.services[s].environment) {
let [env, ...value] = environment.split("=");
value = value.join("=")
if (!value.startsWith('$$secret') && value !== '') {
newEnvironments.push(`${env}=${value}`)
}
} }
} }
} }
@@ -87,12 +75,13 @@ export async function startService(request: FastifyRequest<ServiceStartStop>, fa
} }
const customVolumes = await prisma.servicePersistentStorage.findMany({ where: { serviceId: id } }) const customVolumes = await prisma.servicePersistentStorage.findMany({ where: { serviceId: id } })
let volumes = new Set() let volumes = new Set()
if (arm) { if (arm && template.services[s]?.volumesArm?.length > 0) {
template.services[s]?.volumesArm && template.services[s].volumesArm.length > 0 && template.services[s].volumesArm.forEach(v => volumes.add(v)) template.services[s].volumesArm.forEach(v => volumes.add(v))
} else { } else {
template.services[s]?.volumes && template.services[s].volumes.length > 0 && template.services[s].volumes.forEach(v => volumes.add(v)) if (template.services[s]?.volumes?.length > 0) {
template.services[s].volumes.forEach(v => volumes.add(v))
}
} }
// Workaround: old plausible analytics service wrong volume id name // Workaround: old plausible analytics service wrong volume id name
if (service.type === 'plausibleanalytics' && service.plausibleAnalytics?.id) { if (service.type === 'plausibleanalytics' && service.plausibleAnalytics?.id) {
let temp = Array.from(volumes) let temp = Array.from(volumes)
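
The two hunks above drop the ARM-specific environment branch: environments now always come from `environment`, while volumes still prefer `volumesArm` on ARM hosts. A condensed sketch of the resulting selection, assuming the template service shape used here (optional `environment`, `volumes`, `volumesArm` string arrays); the function name is illustrative:

type TemplateService = {
	environment?: string[]; // entries of the form KEY=VALUE
	volumes?: string[];
	volumesArm?: string[];
};

// Sketch of the post-change behaviour; `$$secret` placeholders are resolved elsewhere and skipped here.
function resolveServiceDefaults(service: TemplateService, arm: boolean) {
	const environments: string[] = [];
	for (const entry of service.environment ?? []) {
		const [key, ...rest] = entry.split('=');
		const value = rest.join('=');
		if (value !== '' && !value.startsWith('$$secret')) {
			environments.push(`${key}=${value}`);
		}
	}
	const volumes = new Set<string>(
		arm && service.volumesArm?.length ? service.volumesArm : service.volumes ?? []
	);
	return { environments, volumes };
}
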


@@ -736,7 +736,7 @@ export async function deleteApplication(
where: { id }, where: { id },
include: { destinationDocker: true, teams: true } include: { destinationDocker: true, teams: true }
}); });
if (teamId !== '0' || !application.teams.some((team) => team.id === teamId)) { if (teamId !== '0' && !application.teams.some((team) => team.id === teamId)) {
throw { status: 403, message: 'You are not allowed to delete this application.' }; throw { status: 403, message: 'You are not allowed to delete this application.' };
} }
if (application?.destinationDocker?.id && application.destinationDocker?.network) { if (application?.destinationDocker?.id && application.destinationDocker?.network) {
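
The operator change is the actual fix: with `||`, every request from a non-root team (teamId !== '0') was rejected, even when that team owned the application; with `&&`, only non-root teams that do not own the application are rejected. A sketch of the intended rule, assuming team '0' is the root/admin team as in the check above:

type Team = { id: string };

// Sketch: delete is allowed for the root team or for a team that owns the application.
function canDeleteApplication(teamId: string, applicationTeams: Team[]): boolean {
	const isRootTeam = teamId === '0';
	const ownsApplication = applicationTeams.some((team) => team.id === teamId);
	return isRootTeam || ownsApplication;
}

// canDeleteApplication('0', [{ id: '7' }]) -> true  (root team)
// canDeleteApplication('7', [{ id: '7' }]) -> true  (owning team; rejected before the fix)
// canDeleteApplication('9', [{ id: '7' }]) -> false (unrelated team)
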
@@ -1340,16 +1340,16 @@ export async function getStorages(request: FastifyRequest<OnlyId>) {
export async function saveStorage(request: FastifyRequest<SaveStorage>, reply: FastifyReply) { export async function saveStorage(request: FastifyRequest<SaveStorage>, reply: FastifyReply) {
try { try {
const { id } = request.params; const { id } = request.params;
const { path, newStorage, storageId } = request.body; const { hostPath, path, newStorage, storageId } = request.body;
if (newStorage) { if (newStorage) {
await prisma.applicationPersistentStorage.create({ await prisma.applicationPersistentStorage.create({
data: { path, application: { connect: { id } } } data: { hostPath, path, application: { connect: { id } } }
}); });
} else { } else {
await prisma.applicationPersistentStorage.update({ await prisma.applicationPersistentStorage.update({
where: { id: storageId }, where: { id: storageId },
data: { path } data: { hostPath, path }
}); });
} }
return reply.code(201).send(); return reply.code(201).send();
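
With hostPath stored next to path, a persistent-storage row can describe either a named volume or a host bind mount. A sketch of how such a row might be turned into a `source:target` volume specification; the helper, the tilde expansion and the volume-name scheme are illustrative, not taken from the diff:

import os from 'node:os';

type PersistentStorage = { path: string; hostPath?: string | null };

// Illustrative: produce the `source:target` string docker/compose expects.
function toVolumeSpec(applicationId: string, storage: PersistentStorage): string {
	if (storage.hostPath) {
		// Bind mount: expand ~ so the source is an absolute path on the host.
		const source = storage.hostPath.replace(/^~/, os.homedir());
		return `${source}:${storage.path}`;
	}
	// Named volume derived from the application id and the container path.
	const volumeName = `${applicationId}${storage.path.replace(/\//g, '-')}`;
	return `${volumeName}:${storage.path}`;
}
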


@@ -96,6 +96,7 @@ export interface DeleteSecret extends OnlyId {
} }
export interface SaveStorage extends OnlyId { export interface SaveStorage extends OnlyId {
Body: { Body: {
hostPath?: string;
path: string; path: string;
newStorage: boolean; newStorage: boolean;
storageId: string; storageId: string;


@@ -302,7 +302,7 @@ export async function startDatabase(request: FastifyRequest<OnlyId>) {
databaseSecret databaseSecret
} = database; } = database;
const { privatePort, command, environmentVariables, image, volume, ulimits } = const { privatePort, command, environmentVariables, image, volume, ulimits } =
generateDatabaseConfiguration(database, arch); generateDatabaseConfiguration(database);
const network = destinationDockerId && destinationDocker.network; const network = destinationDockerId && destinationDocker.network;
const volumeName = volume.split(':')[0]; const volumeName = volume.split(':')[0];


@@ -156,14 +156,21 @@ export async function update(request: FastifyRequest<Update>) {
try { try {
if (!isDev) { if (!isDev) {
const { isAutoUpdateEnabled } = await prisma.setting.findFirst(); const { isAutoUpdateEnabled } = await prisma.setting.findFirst();
await executeCommand({ command: `docker pull coollabsio/coolify:${latestVersion}` }); let image = `ghcr.io/coollabsio/coolify:${latestVersion}`;
try {
await executeCommand({ command: `docker pull ${image}` });
} catch (error) {
image = `coollabsio/coolify:${latestVersion}`;
await executeCommand({ command: `docker pull ${image}` });
}
await executeCommand({ shell: true, command: `env | grep COOLIFY > .env` }); await executeCommand({ shell: true, command: `env | grep COOLIFY > .env` });
await executeCommand({ await executeCommand({
command: `sed -i '/COOLIFY_AUTO_UPDATE=/cCOOLIFY_AUTO_UPDATE=${isAutoUpdateEnabled}' .env` command: `sed -i '/COOLIFY_AUTO_UPDATE=/cCOOLIFY_AUTO_UPDATE=${isAutoUpdateEnabled}' .env`
}); });
await executeCommand({ await executeCommand({
shell: true, shell: true,
command: `docker run --rm -tid --env-file .env -v /var/run/docker.sock:/var/run/docker.sock -v coolify-db coollabsio/coolify:${latestVersion} /bin/sh -c "env | grep COOLIFY > .env && echo 'TAG=${latestVersion}' >> .env && docker stop -t 0 coolify coolify-fluentbit && docker rm coolify coolify-fluentbit && docker compose pull && docker compose up -d --force-recreate"` command: `docker run --rm -tid --env-file .env -v /var/run/docker.sock:/var/run/docker.sock -v coolify-db ${image} /bin/sh -c "env | grep COOLIFY > .env && echo 'TAG=${latestVersion}' >> .env && docker stop -t 0 coolify coolify-fluentbit && docker rm coolify coolify-fluentbit && docker compose pull && docker compose up -d --force-recreate"`
}); });
return {}; return {};
} else { } else {
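
The update path now pulls from ghcr.io first and falls back to Docker Hub only if that pull fails. A generic sketch of the pattern, assuming an executeCommand helper with the shape used above:

// Sketch: try registries in order and return the first image reference that pulls successfully.
async function pullWithFallback(
	executeCommand: (opts: { command: string }) => Promise<unknown>,
	candidates: string[]
): Promise<string> {
	let lastError: unknown;
	for (const image of candidates) {
		try {
			await executeCommand({ command: `docker pull ${image}` });
			return image;
		} catch (error) {
			lastError = error;
		}
	}
	throw lastError ?? new Error('No image candidate could be pulled.');
}

// const image = await pullWithFallback(executeCommand, [
// 	`ghcr.io/coollabsio/coolify:${latestVersion}`,
// 	`coollabsio/coolify:${latestVersion}`
// ]);
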


@@ -543,6 +543,9 @@ export async function proxyConfiguration(request: FastifyRequest<OnlyId>, remote
const template: any = await parseAndFindServiceTemplates(service, null, true); const template: any = await parseAndFindServiceTemplates(service, null, true);
const { proxy } = template.services[oneService] || found.services[oneService]; const { proxy } = template.services[oneService] || found.services[oneService];
for (let configuration of proxy) { for (let configuration of proxy) {
if (configuration.hostPort) {
continue;
}
if (configuration.domain) { if (configuration.domain) {
const setting = serviceSetting.find( const setting = serviceSetting.find(
(a) => a.variableName === configuration.domain (a) => a.variableName === configuration.domain
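
Proxy entries that declare a hostPort are published directly on the host, so the new guard skips them when building reverse-proxy configuration; only domain-based entries need a routing rule. A sketch of that filtering step — the types and return shape are illustrative:

type ProxyEntry = { hostPort?: number; domain?: string };
type ServiceSetting = { variableName: string; value: string };

// Sketch: skip host-port entries, then resolve each domain placeholder to its stored
// value (the template refers to domains by variable name) before emitting a route.
function resolveRoutableDomains(proxy: ProxyEntry[], serviceSetting: ServiceSetting[]): string[] {
	const domains: string[] = [];
	for (const configuration of proxy) {
		if (configuration.hostPort) continue; // published directly on the host
		if (!configuration.domain) continue;
		const setting = serviceSetting.find((s) => s.variableName === configuration.domain);
		if (setting?.value) domains.push(setting.value);
	}
	return domains;
}
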

File diff suppressed because one or more lines are too long


@@ -12,6 +12,7 @@
import { errorNotification } from '$lib/common'; import { errorNotification } from '$lib/common';
import { addToast } from '$lib/store'; import { addToast } from '$lib/store';
import CopyVolumeField from '$lib/components/CopyVolumeField.svelte'; import CopyVolumeField from '$lib/components/CopyVolumeField.svelte';
import SimpleExplainer from '$lib/components/SimpleExplainer.svelte';
const { id } = $page.params; const { id } = $page.params;
let isHttps = browser && window.location.protocol === 'https:'; let isHttps = browser && window.location.protocol === 'https:';
export let value: string; export let value: string;
@@ -33,11 +34,13 @@
storage.path.replace(/\/\//g, '/'); storage.path.replace(/\/\//g, '/');
await post(`/applications/${id}/storages`, { await post(`/applications/${id}/storages`, {
path: storage.path, path: storage.path,
hostPath: storage.hostPath,
storageId: storage.id, storageId: storage.id,
newStorage newStorage
}); });
dispatch('refresh'); dispatch('refresh');
if (isNew) { if (isNew) {
storage.hostPath = null;
storage.path = null; storage.path = null;
storage.id = null; storage.id = null;
} }
@@ -80,27 +83,42 @@
<div class="flex gap-4 pb-2" class:pt-8={isNew}> <div class="flex gap-4 pb-2" class:pt-8={isNew}>
{#if storage.applicationId} {#if storage.applicationId}
{#if storage.oldPath} {#if storage.oldPath}
<CopyVolumeField
<CopyVolumeField value="{storage.applicationId}{storage.path.replace(/\//gi, '-').replace('-app', '')}"
/>
{:else if !storage.hostPath}
<CopyVolumeField
value="{storage.applicationId}{storage.path.replace(/\//gi, '-').replace('-app', '')}" value="{storage.applicationId}{storage.path.replace(/\//gi, '-').replace('-app', '')}"
/>
{:else}
<CopyVolumeField
value="{storage.applicationId}{storage.path.replace(/\//gi, '-').replace('-app', '')}"
/> />
{/if} {/if}
{/if} {/if}
{#if isNew}
<div class="w-full">
<input
disabled={!isNew}
readonly={!isNew}
bind:value={storage.hostPath}
placeholder="Host path, example: ~/.directory"
/>
<SimpleExplainer
text="You can mount <span class='text-yellow-400 font-bold'>host paths</span> from the operating system.<br>Leave it empty to define a volume based volume."
/>
</div>
{:else if storage.hostPath}
<input disabled readonly value={storage.hostPath} />
{/if}
<input <input
disabled={!isNew} disabled={!isNew}
readonly={!isNew} readonly={!isNew}
class="w-full" class="w-full"
bind:value={storage.path} bind:value={storage.path}
required required
placeholder="eg: /data" placeholder="Mount point inside the container, example: /data"
/> />
<div class="flex items-center justify-center"> <div class="flex items-start justify-center">
{#if isNew} {#if isNew}
<div class="w-full lg:w-64"> <div class="w-full lg:w-64">
<button class="btn btn-sm btn-primary w-full" on:click={() => saveStorage(true)} <button class="btn btn-sm btn-primary w-full" on:click={() => saveStorage(true)}


@@ -427,29 +427,6 @@
</svg> Stop </svg> Stop
</button> </button>
{:else if $isDeploymentEnabled && !$page.url.pathname.startsWith(`/applications/${id}/configuration/`)} {:else if $isDeploymentEnabled && !$page.url.pathname.startsWith(`/applications/${id}/configuration/`)}
{#if $status.application.overallStatus === 'degraded'}
<button
on:click={stopApplication}
type="submit"
disabled={!$isDeploymentEnabled || !$appSession.isAdmin}
class="btn btn-sm gap-2"
>
<svg
xmlns="http://www.w3.org/2000/svg"
class="w-6 h-6 text-error"
viewBox="0 0 24 24"
stroke-width="1.5"
stroke="currentColor"
fill="none"
stroke-linecap="round"
stroke-linejoin="round"
>
<path stroke="none" d="M0 0h24v24H0z" fill="none" />
<rect x="6" y="5" width="4" height="14" rx="1" />
<rect x="14" y="5" width="4" height="14" rx="1" />
</svg> Stop
</button>
{/if}
<button <button
class="btn btn-sm gap-2" class="btn btn-sm gap-2"
disabled={!$isDeploymentEnabled || !$appSession.isAdmin} disabled={!$isDeploymentEnabled || !$appSession.isAdmin}
@@ -493,6 +470,29 @@
: 'Redeploy Stack' : 'Redeploy Stack'
: 'Deploy'} : 'Deploy'}
</button> </button>
{#if $status.application.overallStatus === 'degraded'}
<button
on:click={stopApplication}
type="submit"
disabled={!$isDeploymentEnabled || !$appSession.isAdmin}
class="btn btn-sm gap-2"
>
<svg
xmlns="http://www.w3.org/2000/svg"
class="w-6 h-6 text-error"
viewBox="0 0 24 24"
stroke-width="1.5"
stroke="currentColor"
fill="none"
stroke-linecap="round"
stroke-linejoin="round"
>
<path stroke="none" d="M0 0h24v24H0z" fill="none" />
<rect x="6" y="5" width="4" height="14" rx="1" />
<rect x="14" y="5" width="4" height="14" rx="1" />
</svg> Stop
</button>
{/if}
{/if} {/if}
{#if $location && $status.application.overallStatus === 'healthy'} {#if $location && $status.application.overallStatus === 'healthy'}
<a href={$location} target="_blank noreferrer" class="btn btn-sm gap-2 text-sm bg-primary" <a href={$location} target="_blank noreferrer" class="btn btn-sm gap-2 text-sm bg-primary"


@@ -406,7 +406,7 @@
> >
{#if tryAgain} {#if tryAgain}
<div class="p-5"> <div class="p-5">
An error occured during authenticating with GitLab. Please check your GitLab Source An error occurred during authenticating with GitLab. Please check your GitLab Source
configuration <a href={`/sources/${application.gitSource.id}`}>here.</a> configuration <a href={`/sources/${application.gitSource.id}`}>here.</a>
</div> </div>
<button <button


@@ -158,7 +158,7 @@
id="dockerImage" id="dockerImage"
name="dockerImage" name="dockerImage"
required required
placeholder="coollabsio/coolify:0.0.1" placeholder="ghcr.io/coollabsio/coolify:0.0.1"
bind:value={remoteImage} bind:value={remoteImage}
/> />
<button class="btn btn-sm btn-primary" type="submit">Revert Now</button> <button class="btn btn-sm btn-primary" type="submit">Revert Now</button>


@@ -35,17 +35,46 @@
for (const [_, service] of Object.entries(composeJson.services)) { for (const [_, service] of Object.entries(composeJson.services)) {
if (service?.volumes) { if (service?.volumes) {
for (const [_, volumeName] of Object.entries(service.volumes)) { for (const [_, volumeName] of Object.entries(service.volumes)) {
let [volume, target] = volumeName.split(':'); if (typeof volumeName === 'string') {
if (volume === '.') { let [volume, target] = volumeName.split(':');
volume = target; if (
volume.startsWith('.') ||
volume.startsWith('..') ||
volume.startsWith('/') ||
volume.startsWith('~') ||
volume.startsWith('$PWD')
) {
volume = volume.replace(/^\./, `~`).replace(/^\.\./, '~').replace(/^\$PWD/, '~');
} else {
if (!target) {
target = volume;
volume = `${application.id}${volume.replace(/\//gi, '-').replace(/\./gi, '')}`;
} else {
volume = `${application.id}${volume.replace(/\//gi, '-').replace(/\./gi, '')}`;
}
}
predefinedVolumes.push({ id: volume, path: target, predefined: true });
} }
if (!target) { if (typeof volumeName === 'object') {
target = volume; let { source, target } = volumeName;
volume = `${application.id}${volume.replace(/\//gi, '-').replace(/\./gi, '')}`; if (
} else { source.startsWith('.') ||
volume = `${application.id}${volume.replace(/\//gi, '-').replace(/\./gi, '')}`; source.startsWith('..') ||
source.startsWith('/') ||
source.startsWith('~') ||
source.startsWith('$PWD')
) {
source = source.replace(/^\./, `~`).replace(/^\.\./, '~').replace(/^\$PWD/, '~');
} else {
if (!target) {
target = source;
source = `${application.id}${source.replace(/\//gi, '-').replace(/\./gi, '')}`;
} else {
source = `${application.id}${source.replace(/\//gi, '-').replace(/\./gi, '')}`;
}
}
predefinedVolumes.push({ id: source, path: target, predefined: true });
} }
predefinedVolumes.push({ id: volume, path: target, predefined: true });
} }
} }
} }
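
The rewritten loop accepts both the string (`source:target`) and the object form of compose volumes, maps relative, absolute, `~` and `$PWD` sources to a `~`-prefixed host path, and otherwise generates an application-scoped named volume. A condensed sketch of that normalisation; the function name is illustrative:

type ComposeVolume = string | { source: string; target: string };

// Sketch of the normalisation applied to each compose volume entry.
function normalizeComposeVolume(applicationId: string, entry: ComposeVolume): { id: string; path: string } {
	let source: string;
	let target: string | undefined;
	if (typeof entry === 'string') {
		[source, target] = entry.split(':');
	} else {
		({ source, target } = entry);
	}
	if (/^(\.|\.\.|\/|~|\$PWD)/.test(source)) {
		// Host paths: normalise ., .. and $PWD prefixes to ~.
		source = source.replace(/^\$PWD/, '~').replace(/^\.\.?/, '~');
	} else {
		// Named volumes: scope the name to the application id.
		if (!target) target = source;
		source = `${applicationId}${source.replace(/\//g, '-').replace(/\./g, '')}`;
	}
	return { id: source, path: target ?? source };
}
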
@@ -88,14 +117,14 @@
{/key} {/key}
{/each} {/each}
{#if $appSession.isAdmin} {#if $appSession.isAdmin}
<div class:pt-10={predefinedVolumes.length > 0}> <div class:pt-10={predefinedVolumes.length > 0}>
Add New Volume <Explainer Add New Volume <Explainer
position="dropdown-bottom" position="dropdown-bottom"
explanation={$t('application.storage.persistent_storage_explainer')} explanation={$t('application.storage.persistent_storage_explainer')}
/> />
</div> </div>
<Storage on:refresh={refreshStorage} isNew /> <Storage on:refresh={refreshStorage} isNew />
{/if} {/if}
</div> </div>
</div> </div>


@@ -49,23 +49,23 @@
databaseDbUser = ''; databaseDbUser = '';
} }
} }
function generateUrl() { function ipAddress() {
const ipAddress = () => { if ($status.database.isPublic) {
if ($status.database.isPublic) { if (database.destinationDocker.remoteEngine) {
if (database.destinationDocker.remoteEngine) { return database.destinationDocker.remoteIpAddress;
return database.destinationDocker.remoteIpAddress;
}
if ($appSession.ipv6) {
return $appSession.ipv6;
}
if ($appSession.ipv4) {
return $appSession.ipv4;
}
return '<Cannot determine public IP address>';
} else {
return database.id;
} }
}; if ($appSession.ipv6) {
return $appSession.ipv6;
}
if ($appSession.ipv4) {
return $appSession.ipv4;
}
return '<Cannot determine public IP address>';
} else {
return database.id;
}
}
function generateUrl() {
const user = () => { const user = () => {
if (databaseDbUser) { if (databaseDbUser) {
return databaseDbUser + ':'; return databaseDbUser + ':';
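
ipAddress() is hoisted out of generateUrl() so the same host resolution can also feed the new Public Host field further down. A sketch of the resolution order, assuming the session exposes optional ipv4/ipv6 addresses as above:

type Session = { ipv4?: string; ipv6?: string };
type Database = {
	id: string;
	destinationDocker: { remoteEngine: boolean; remoteIpAddress?: string };
};

// Sketch: public databases resolve to a reachable host address, private ones to the container id.
function resolveDatabaseHost(database: Database, session: Session, isPublic: boolean): string {
	if (!isPublic) return database.id;
	if (database.destinationDocker.remoteEngine && database.destinationDocker.remoteIpAddress) {
		return database.destinationDocker.remoteIpAddress;
	}
	return session.ipv6 ?? session.ipv4 ?? '<Cannot determine public IP address>';
}
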
@@ -183,16 +183,38 @@
class:cursor-pointer={!$status.database.isRunning} class:cursor-pointer={!$status.database.isRunning}
/></a /></a
> >
<label for="host">{$t('forms.host')}</label> {#if $status.database.isPublic}
<CopyPasswordField <label for="internalHost">Internal Host</label>
placeholder={$t('forms.generated_automatically_after_start')} <CopyPasswordField
isPasswordField={false} isPasswordField={false}
readonly readonly
disabled disabled
id="host" id="internalHost"
name="host" name="internalHost"
value={database.id} value={database.id}
/> />
<label for="host">Public Host</label>
<CopyPasswordField
placeholder={$t('forms.generated_automatically_after_start')}
isPasswordField={false}
readonly
disabled
id="host"
name="host"
value={loading.public ? 'Loading...' : ipAddress()}
/>
{:else}
<label for="internalHost">Host</label>
<CopyPasswordField
isPasswordField={false}
readonly
disabled
id="internalHost"
name="internalHost"
value={database.id}
/>
{/if}
<label for="publicPort">{$t('forms.port')}</label> <label for="publicPort">{$t('forms.port')}</label>
<CopyPasswordField <CopyPasswordField
placeholder={$t('database.generated_automatically_after_set_to_public')} placeholder={$t('database.generated_automatically_after_set_to_public')}


@@ -86,7 +86,7 @@
readonly readonly
class="w-full" class="w-full"
value={`${ value={`${
services.find((s) => s.id === storage.containerId).name || storage.containerId services.find((s) => s.id === storage.containerId)?.name || storage.containerId
}`} }`}
/> />
</div> </div>
@@ -111,19 +111,18 @@
name="containerId" name="containerId"
class="w-full lg:w-64" class="w-full lg:w-64"
disabled={storage.predefined} disabled={storage.predefined}
readonly={storage.predefined}
bind:value={storage.containerId} bind:value={storage.containerId}
> >
{#if services.length === 1} {#if services.length === 1}
{#if services[0].name} {#if services[0].name}
<option selected value={services[0].id}>{services[0].name}</option> <option selected value={services[0].id}>{services[0]?.name}</option>
{:else} {:else}
<option selected value={services[0]}>{services[0]}</option> <option selected value={services[0]}>{services[0]}</option>
{/if} {/if}
{:else} {:else}
{#each services as service} {#each services as service}
{#if service.name} {#if service.name}
<option value={service.id}>{service.name}</option> <option value={service.id}>{service?.name}</option>
{:else} {:else}
<option value={service}>{service}</option> <option value={service}>{service}</option>
{/if} {/if}
@@ -157,7 +156,7 @@
disabled disabled
readonly readonly
class="w-full" class="w-full"
value={`${services.find((s) => s.id === storage.containerId).name || storage.containerId}`} value={`${services.find((s) => s.id === storage.containerId)?.name || storage.containerId}`}
/> />
<input disabled readonly class="w-full" value={`${storage.volumeName}:${storage.path}`} /> <input disabled readonly class="w-full" value={`${storage.volumeName}:${storage.path}`} />
<button <button


@@ -34,7 +34,7 @@ services:
networks: networks:
- coolify-infra - coolify-infra
fluent-bit: fluent-bit:
image: coollabsio/coolify-fluent-bit:1.0.0 image: ghcr.io/coollabsio/fluent-bit:1.0.0
command: /fluent-bit/bin/fluent-bit -c /fluent-bit/etc/fluent-bit-dev.conf command: /fluent-bit/bin/fluent-bit -c /fluent-bit/etc/fluent-bit-dev.conf
container_name: coolify-fluentbit container_name: coolify-fluentbit
volumes: volumes:


@@ -2,7 +2,7 @@ version: '3.8'
services: services:
coolify: coolify:
image: coollabsio/coolify:${TAG:-latest} image: ghcr.io/coollabsio/coolify:${TAG:-latest}
restart: always restart: always
container_name: coolify container_name: coolify
ports: ports:
@@ -23,7 +23,7 @@ services:
networks: networks:
- coolify-infra - coolify-infra
fluent-bit: fluent-bit:
image: coollabsio/coolify-fluent-bit:1.0.0 image: ghcr.io/coollabsio/fluent-bit:1.0.0
container_name: coolify-fluentbit container_name: coolify-fluentbit
volumes: volumes:
- 'coolify-logs:/app/logs' - 'coolify-logs:/app/logs'


@@ -1,4 +0,0 @@
FROM fluent/fluent-bit:1.9.8
COPY ./fluent-bit.conf /fluent-bit/etc/fluent-bit.conf
COPY ./fluent-bit-dev.conf /fluent-bit/etc/fluent-bit-dev.conf
COPY ./parsers.conf /fluent-bit/etc/parsers.conf


@@ -1,30 +0,0 @@
[SERVICE]
Parsers_file /fluent-bit/etc/parsers.conf
Flush 1
Grace 30
[INPUT]
Name http
Host 0.0.0.0
Port 24224
[FILTER]
Name parser
Match *
Key_Name log
Parser jsonparser
Reserve_Data True
[OUTPUT]
Name file
Match *
Path /logs
Mkdir true
Format csv
# [OUTPUT]
# Name influxdb
# match *
# Host coolify-influxdb
# Port 8086
# Database coolify
# Bucket coolify
# Org coolify
# HTTP_Token 12345678
# Sequence_Tag _seq


@@ -1,30 +0,0 @@
[SERVICE]
Parsers_file /fluent-bit/etc/parsers.conf
Flush 1
Grace 30
[INPUT]
Name http
Host 0.0.0.0
Port 24224
[FILTER]
Name parser
Match *
Key_Name log
Parser jsonparser
Reserve_Data True
[OUTPUT]
Name file
Match *
Path /app/logs
Mkdir true
Format csv
# [OUTPUT]
# Name influxdb
# match *
# Host coolify-influxdb
# Port 8086
# Database coolify
# Bucket coolify
# Org coolify
# HTTP_Token 12345678
# Sequence_Tag _seq


@@ -1,6 +0,0 @@
[PARSER]
Name jsonparser
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
Time_Keep On


@@ -1,12 +0,0 @@
FROM alpine:3.17
ARG BUILDARCH
ARG PB_VERSION=0.12.3
RUN apk add --no-cache \
unzip \
ca-certificates
ADD https://github.com/pocketbase/pocketbase/releases/download/v${PB_VERSION}/pocketbase_${PB_VERSION}_linux_${BUILDARCH}.zip /tmp/pb.zip
RUN unzip /tmp/pb.zip -d /app/
RUN rm /tmp/pb.zip
EXPOSE 8080
CMD ["/app/pocketbase", "serve", "--http=0.0.0.0:8080"]


@@ -1,7 +1,7 @@
{ {
"name": "coolify", "name": "coolify",
"description": "An open-source & self-hostable Heroku / Netlify alternative.", "description": "An open-source & self-hostable Heroku / Netlify alternative.",
"version": "3.12.20", "version": "3.12.29",
"license": "Apache-2.0", "license": "Apache-2.0",
"repository": "github:coollabsio/coolify", "repository": "github:coollabsio/coolify",
"scripts": { "scripts": {
@@ -32,7 +32,7 @@
"build:api": "NODE_ENV=production pnpm run --filter api build", "build:api": "NODE_ENV=production pnpm run --filter api build",
"build:ui": "NODE_ENV=production pnpm run --filter ui build", "build:ui": "NODE_ENV=production pnpm run --filter ui build",
"dockerlogin": "echo $DOCKER_PASS | docker login --username=$DOCKER_USER --password-stdin", "dockerlogin": "echo $DOCKER_PASS | docker login --username=$DOCKER_USER --password-stdin",
"release:staging:amd": "docker build -t coollabsio/coolify:next . && docker push coollabsio/coolify:next", "release:staging:amd": "docker build -t ghcr.io/coollabsio/coolify:next . && docker push ghcr.io/coollabsio/coolify:next",
"release:local": "rm -fr ./local-serve && mkdir ./local-serve && pnpm build && cp -Rp apps/api/build/* ./local-serve && cp -Rp apps/ui/build/ ./local-serve/public && cp -Rp apps/api/prisma/ ./local-serve/prisma && cp -Rp apps/api/package.json ./local-serve && env | grep '^COOLIFY_' > ./local-serve/.env && cd ./local-serve && pnpm install . && pnpm start" "release:local": "rm -fr ./local-serve && mkdir ./local-serve && pnpm build && cp -Rp apps/api/build/* ./local-serve && cp -Rp apps/ui/build/ ./local-serve/public && cp -Rp apps/api/prisma/ ./local-serve/prisma && cp -Rp apps/api/package.json ./local-serve && env | grep '^COOLIFY_' > ./local-serve/.env && cd ./local-serve && pnpm install . && pnpm start"
}, },
"devDependencies": { "devDependencies": {