Ceph upstream released the first stable version of ‘Octopus’ today, and you can test it easily on Ubuntu with automatic upgrades to the final GA release. This version adds significant multi-site replication capabilities, important for large-scale redundancy and disaster recovery. Ceph v15.2.0 Octopus packages are built for Ubuntu 18.04 LTS, CentOS 7 and 8, and Debian Buster, and a container image (based on CentOS 8) is also available.
What’s new in Ceph Octopus?
The Ceph Octopus release focuses on five themes: multi-site usage, quality, performance, usability and ecosystem.
Snapshot scheduling, snapshot pruning and periodic snapshot synchronisation to a remote cluster are all new CephFS features that enable Ceph multi-site replication. A new snapshot-based mirroring mode for RBD is also part of the Octopus release. These features help automate backups, save storage space, and make it easier to share data and protect it from potential failures.
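As a brief sketch of how these features are driven from the CLI (the path, pool and image names below are hypothetical, and the commands are run against an existing Octopus cluster):

```shell
# Enable the CephFS snapshot scheduler module and take a snapshot
# of /volumes/data every hour:
ceph mgr module enable snap_schedule
ceph fs snap-schedule add /volumes/data 1h

# Snapshot pruning: retain only the 24 most recent hourly snapshots.
ceph fs snap-schedule retention add /volumes/data h 24

# New snapshot-based RBD mirroring mode: enable per-image mirroring
# on the pool, then switch the image to snapshot mode.
rbd mirror pool enable mypool image
rbd mirror image enable mypool/myimage snapshot
```

Unlike the journal-based RBD mirroring mode, snapshot mode replicates by periodically taking mirror snapshots and copying only the changed blocks to the remote cluster.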
Simple health alerts are now raised for Ceph daemon crashes and can trigger email notifications, reducing the need to deploy an external cluster monitoring infrastructure. A new “device” telemetry channel for hard disk and SSD health metrics improves the device failure prediction model. Telemetry remains opt-in, and users are asked to opt in again whenever the telemetry content is expanded.
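A few illustrative commands for working with these features (assuming a running Octopus cluster with admin credentials):

```shell
# List recent daemon crashes that triggered the new health alerts:
ceph crash ls

# Acknowledge all crashes so the RECENT_CRASH health warning clears:
ceph crash archive-all

# Opt in to telemetry and enable the new "device" channel for
# disk/SSD health metrics:
ceph telemetry on
ceph config set mgr mgr/telemetry/channel_device true

# Preview exactly what would be reported before anything is sent:
ceph telemetry show
```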
Recovery tail latency has been improved: objects are now synced during recovery by copying only each object’s delta. BlueStore, the object store back-end, has received several improvements and performance updates, including improved accounting for “omap” (key/value) object data by pool, improved cache memory management, and a reduced allocation unit size for SSD devices.
The orchestrator API now interfaces with a new orchestrator module, cephadm, which manages Ceph daemons on cluster hosts over SSH, requiring only a working container runtime on each host. The Ceph dashboard is also integrated with the orchestrator. Additionally, users can now mute health alerts, temporarily or permanently.
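A minimal sketch of the cephadm and alert-muting workflows (the IP address and hostname are placeholders; each host needs podman or docker installed):

```shell
# Bootstrap a new single-node cluster; cephadm deploys the mon and mgr
# as containers on this host:
cephadm bootstrap --mon-ip 192.0.2.10

# Add further hosts over SSH so the orchestrator can place daemons there:
ceph orch host add node2

# Mute a health alert for one hour, or unmute it early:
ceph health mute OSD_DOWN 1h
ceph health unmute OSD_DOWN
```

A mute issued without a duration stays in effect until explicitly unmuted.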
ceph-csi, the Ceph container storage interface driver, now supports RWO and RWX access modes via RBD and CephFS. In addition, integration with Rook provides a turn-key ceph-csi deployment by default. This integration includes RBD mirroring and RGW multi-site, and will eventually include CephFS mirroring too.
Octopus available for testing on Ubuntu
Try Ceph Octopus now on Ubuntu to combine the benefits of a proven storage technology solution with a secure and reliable operating system. You can install the Ceph Octopus Beta from the OpenStack Ussuri Ubuntu Cloud Archive for Ubuntu 18.04 LTS or using the development version of Ubuntu 20.04 LTS (Focal Fossa).
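On Ubuntu 18.04 LTS, the Ussuri Ubuntu Cloud Archive pocket can be enabled in a couple of commands:

```shell
# Enable the OpenStack Ussuri Ubuntu Cloud Archive on 18.04 LTS:
sudo add-apt-repository cloud-archive:ussuri
sudo apt update

# Install the Ceph packages and confirm the Octopus (15.2.x) version:
sudo apt install ceph
ceph --version
```

On the Ubuntu 20.04 LTS development release, the Octopus packages are in the main archive, so `sudo apt install ceph` is sufficient on its own.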
Canonical supports all Ceph releases as part of the Ubuntu Advantage for Infrastructure enterprise support offering. Ceph Octopus charms will be released alongside Canonical’s OpenStack Ussuri release on May 20th 2020. These will allow users to automate Ceph Octopus deployments and day-2 operations using Juju, the application modelling tool.
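For a sense of what a charmed deployment looks like, here is a minimal sketch using the existing ceph-mon and ceph-osd charms against a bootstrapped Juju controller (the unit counts and the `osd-devices` disk path are illustrative assumptions):

```shell
# Deploy a three-monitor, three-OSD Ceph cluster with Juju:
juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd --config osd-devices=/dev/sdb

# Relate the OSDs to the monitors so the cluster forms:
juju add-relation ceph-osd ceph-mon

# Watch the units come up:
juju status
```

Day-2 operations such as adding OSD hosts then reduce to `juju add-unit ceph-osd`.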