I just ran the Debian 11 install script on a VPS at Vultr, as I have done plenty of times before, but after the script finishes the MariaDB service does not work at all:
Jun 14 06:01:41 systemd[1]: Starting MariaDB 10.5.23 database server...
Jun 14 06:01:41 mariadbd[4292]: 2024-06-14 6:01:41 0 [Note] Starting MariaDB 10.5.23-MariaDB-0+deb11u1 source revision 6cfd2ba397b0ca689d8ff1bdb9fc4a4dc516a5eb as proces>
Jun 14 06:01:41 mariadbd[4292]: 2024-06-14 6:01:41 0 [ERROR] InnoDB: innodb_page_size=16384 requires innodb_buffer_pool_size >= 5MiB current 2MiB
Jun 14 06:01:41 mariadbd[4292]: 2024-06-14 6:01:41 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
Jun 14 06:01:41 mariadbd[4292]: 2024-06-14 6:01:41 0 [Note] Plugin 'FEEDBACK' is disabled.
Jun 14 06:01:41 mariadbd[4292]: 2024-06-14 6:01:41 0 [ERROR] Unknown/unsupported storage engine: InnoDB
Jun 14 06:01:41 mariadbd[4292]: 2024-06-14 6:01:41 0 [ERROR] Aborting
Jun 14 06:01:41 systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jun 14 06:01:41 systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jun 14 06:01:41 systemd[1]: Failed to start MariaDB 10.5.23 database server.
Any idea what could be causing this error? I have upgraded multiple PBXs to v4 on Vultr without seeing this before.
Changing innodb_buffer_pool_size from 0G to a proper value allows the MariaDB service to start. I set it to 512M, as the VPS is small.
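For reference, this is roughly how I checked and fixed it by hand; the grep is only there to confirm which drop-in file carries the value (paths as on my Debian 11 install):

    # find where innodb_buffer_pool_size is set
    grep -rn innodb_buffer_pool_size /etc/mysql/

    # after editing /etc/mysql/conf.d/vitalpbx.cnf from 0G to 512M:
    systemctl restart mariadb
    systemctl status mariadb   # should now show active (running)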
So I added sed -i 's/^innodb_buffer_pool_size=0G/innodb_buffer_pool_size=512M/' /etc/mysql/conf.d/vitalpbx.cnf to the install script before the mariadb.service restart line and was able to complete the install fully again.
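In the script it sits right before the restart, roughly like this (only the sed line is mine; the surrounding lines are just a sketch of where it lands):

    # force a sane buffer pool size before MariaDB is restarted
    sed -i 's/^innodb_buffer_pool_size=0G/innodb_buffer_pool_size=512M/' /etc/mysql/conf.d/vitalpbx.cnf
    systemctl restart mariadb.service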
The VM has 1 core and 2GB of RAM, and I could not get the install to finish until I set innodb_buffer_pool_size to something other than the 0G it was set to. This is the first time it has happened.
The day before this error started happening, I deployed a client on a 1-core, 1GB RAM VPS without a problem. That is why I opened this ticket; something was off, as the VPS was also running into out-of-memory issues, which had not happened before.
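(For what it's worth, this is how I spotted the out-of-memory side of it; journalctl -k reads the kernel log, and the grep just filters for OOM-killer messages:)

    journalctl -k | grep -iE 'out of memory|oom'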
The install script works; I have used it multiple times. The issue, as pointed out above, was that the innodb_buffer_pool_size value was set wrongly, but it has been fixed. I deployed 2 VMs today via the script on Vultr without problems.
No, the issue was the innodb_buffer_pool_size value in the /etc/mysql/conf.d/vitalpbx.cnf file; only after manually modifying it yesterday was I able to finalize the install.
Today the value seems to have been changed back to 128M by the VitalPBX devs, as that is what I am seeing in the two VMs I provisioned.
I forgot to update the post yesterday, but I experienced the issue again with another VM with 1 core and 2GB of RAM. As of my last update, I had deployed 2 VMs with 1 core and 1GB of RAM that day; both ran the script without trouble and had /etc/mysql/conf.d/vitalpbx.cnf like this (quoting from memory, relevant line only):
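    # /etc/mysql/conf.d/vitalpbx.cnf (relevant line on the 1GB VMs)
    innodb_buffer_pool_size=128M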
The innodb_buffer_pool_size value defaulted to 128M, which made me think you guys had made changes. But yesterday, on the 2GB RAM VM that hit the issue again, /etc/mysql/conf.d/vitalpbx.cnf looked like this once more (again, relevant line only):
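    # /etc/mysql/conf.d/vitalpbx.cnf (relevant line on the 2GB VM)
    innodb_buffer_pool_size=0G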
The setting innodb_buffer_pool_size was once again set to 0G, causing the full install to fail on me, as MariaDB could not run properly while the script was finishing up.
Is there some sort of logic applied when /etc/mysql/conf.d/vitalpbx.cnf is copied over, or why do you think the file differs between VMs of different RAM sizes, @miguel? I don't think this is a Vultr error, considering the file is a VitalPBX one; the script doesn't log anything different during the process between VMs, yet the file contents are certainly different between the two.
I installed VitalPBX v4 on a new server with 2 vCPUs and 2GB of RAM, and I'm experiencing the same issue: innodb_buffer_pool_size keeps changing to 0G, and it looks like every time I run vitalpbx optimize-mariadb it modifies the file. This issue occurs only on the latest version of VitalPBX.
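It's easy to see it happen; the grep checks before and after are just my way of confirming what the command does to the file (on my server the value comes back as 0G after the optimize command runs):

    grep innodb_buffer_pool_size /etc/mysql/conf.d/vitalpbx.cnf   # value before
    vitalpbx optimize-mariadb
    grep innodb_buffer_pool_size /etc/mysql/conf.d/vitalpbx.cnf   # value after; 0G again on my box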