Cycle is Online.
The platform is reporting healthy and all services are available.
- Console
- Portal
- Compute Proxy
- DNS
- Auth
- Nexus
- Public API
- Manager
- Factory
- Monitor
Active Announcements
- No Active Announcements
Past Announcements
Unexpected Compute Restarts During Update 2025.12.15.2
Resolved
Earlier today, following a routine platform update, we received a small number of reports of brief application interruptions (typically 2–15 minutes) and unexpected reductions in RAM usage across some client compute nodes. After investigating, we identified that the issue only affected servers running CycleOS versions released prior to August 2025; newer versions (August 2025 and later) were not impacted. The root cause was a missing linked Ceph library dependency recently required by our compute service, which could not be applied via a soft update. When today's update deployed, the compute service on affected servers was unable to restart, which triggered our circuit-breaker safety mechanism. As designed, the circuit breaker escalated to a full restart of the compute spawner, resulting in a full recovery on those servers. We're reviewing our internal processes to catch this class of issue sooner, but no further customer impact is expected. If you have any questions or notice anything unusual, our team is here to help.
Changelog
- 2025.12.15.2
New Portal Navigation, Automated Attached Storage Discovery, and Improved Metrics
In this release, users get a rebuilt portal navigation experience with enhanced filtering and search, along with major improvements to how the portal handles load balancer metrics. We've also added the ability to scan for existing volumes on Ceph or iSCSI integrations, making volume discovery much simpler. Alongside these additions come a handful of platform-level optimizations and fixes.
- added
Portal Navigation
Rebuilt portal navigation to support better filtering, search, and navigation via hotkeys.
- added
External Volume Scanning
Cycle can now scan for existing external volumes on a Ceph / iSCSI-backed storage cluster.
- added
Large Stack Builds
Stack builds that require large amounts of RAM can now use the 'use_disk' flag for larger builds.
- security
Improved VirtualFS for SFTP
For container volumes that have SFTP enabled, we've rebuilt the virtualfs implementation to better handle symlinks and relative paths.
- improvement
Load Balancer Metrics in Portal
Significantly refactored how the portal handles filtering and time ranges for load balancer metrics to give better perspective of recent traffic.
- improvement
Quick Disconnects in LB
The Cycle Native LB will no longer throw errors on a connection that 'early-disconnects' during a TLS handshake. Previously, this made Cloudflare TCP healthchecks appear as errors under certain configurations.
- improvement
Support for i226 NICs
CycleOS now boots with Intel i226 NICs.
- fixed
Sticky Bit in Image Builds
For images from DockerHub, Cycle's image builder wouldn't persist a directory's sticky bit if it was added by an upper layer. This has been fixed.
- improvement
Reserved Resources for Services
For servers with more than 8GB of RAM, we've bumped the reserved RAM for the compute service from 256MB to 384MB.
- improvement
Windows Flavor VM Utility Scripts
When deploying a VM where the flavor is set to 'Windows', we now mount an ISO containing a powershell script that can automatically configure networking in Windows to work with Cycle's networking.
- 2025.11.25.1
Better VM Control, Improved Volumes Management, and Platform Fixes
This release adds smarter volume management and more resilient load balancer probe handling. It introduces automatic job-queue retries and includes fixes for RBD initialization and IPv6 auto-extend behavior. Gateway instances are now removed automatically when no longer needed, and VM resource limits can be set arbitrarily in the portal.
- fixed
ISO Attachments for VMs
ISO downloads for VM attachments now work in environments where the discovery service runs at another provider.
- improvement
Clean Orphaned External Volumes
Cycle will now auto-clean orphaned external volumes when 'Delete' is specified during volume deletion.
- improvement
Handle Early HTTP Client-Exits (Probes) Within the LB
If an HTTP client exits prior to TLS negotiation, the LB no longer considers the request an initialization timeout.
- improvement
Auto-Retry for Job Queuing
If a job is being queued during an election, the API will auto-retry every 3 seconds, up to 3 attempts.
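The retry behavior described above can be sketched as a generic loop. This is illustrative only: the callable, the error type, and all names here are assumptions, not Cycle's actual implementation; only the cadence (every 3 seconds, up to 3 attempts) comes from the changelog.

```python
import time

def queue_job_with_retry(submit, attempts=3, delay=3.0):
    """Submit a job, retrying while the control plane is mid-election.

    `submit` is any callable that raises while an election is in
    progress (RuntimeError is a stand-in for that error condition).
    """
    last_err = None
    for attempt in range(attempts):
        try:
            return submit()
        except RuntimeError as err:  # hypothetical "election in progress" error
            last_err = err
            if attempt < attempts - 1:
                time.sleep(delay)  # wait before the next attempt
    raise last_err
```

The key property is that a job submitted during a brief leadership election succeeds transparently once a leader is available, without the caller having to implement its own retry logic.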
- fixed
IP Pool Auto-Extend
Fixed an invalid gateway CIDR for IPv6 ranges that are auto-extended as usage grows.
- fixed
RBD Kernel Module
The RBD kernel module for Ceph is now automatically initialized when needed, if not already loaded.
- improvement
Auto Removal of Gateway Instances
When a server no longer needs gateway instances, they're now automatically removed.
- improvement
Virtual Machine Resource Limits
Virtual machines can now be set to arbitrary resource limits in the portal.
- 2025.11.12.1
Bring Your Own Storage: Mount SANs, Ceph Pools, and More to Containers & VMs
Full external volume support for containers and VMs has just landed in this update. Cycle now lets you plug in SAN or Ceph storage as easily as local disks, bringing flexibility to how you manage persistent data. Windows VMs are now supported as well, bringing more variety to the workloads you can run.
- added
External Volumes - A New Cycle Primitive
Cycle now supports connecting storage backends such as SAN or Ceph clusters, allowing you to create block or filesystem volumes that can be mounted into containers and VMs.
- added
Windows VMs Now Supported
Windows VMs are now supported on Cycle. We've added the ability to specify the VM's operating system, and Cycle will make adjustments to the VM to facilitate it. For Windows, this means mounting the virtio drivers to support the virtualized hardware, including disk and network.
- improvement
Ability to Customize VM CPU Model, Machine Type, and More
It's now possible to customize the CPU model and CPU feature set that is exposed to a VM. This improves compatibility with a variety of different operating systems, including Windows.
- improvement
VM 'configuring' State
A new VM state, 'configuring', was introduced. This state indicates that the VM hypervisor is preparing the operating system, and isn't yet running.
- improvement
VPN Service Customization
You can now add additional directives to the VPN service container. New directives will be appended to the config on start.
- improvement
Optimized Traffic Routing for VMs via Gateway Service
We've reconfigured some SNAT/DNAT routing rules within the gateway service to remove an extra hop that previously affected virtual machines.
- added
'Use Disk' Option For Larger Image Builds
The new 'use disk' option tells Cycle to use the hard disk instead of RAM when generating a container image during a build. This is especially useful for extremely large images, which may not fit into RAM and would otherwise error out when building.
- improvement
Dozens and Dozens of Portal Improvements
We've made a ton of improvements to the Portal, fixing UI quirks, issues, and flows. Everything from improved TLS certificate messaging with wildcards and user certificates to dark mode improvements.
- 2025.09.26.1
Improved Volume Capabilities and Host Device Injection
We've made it easier than ever to manage your most demanding workloads. SAN volumes can now be detached and re-attached across resources, while device injection adds new hardware flexibility. Also in this release is an easier-to-manage virtual provider experience, more customizable service logging, and more granular control over API and pipeline key IP restrictions.
- added
Host Device Injection
Host devices can now be exposed directly to a container via the container config, adding support for more sophisticated applications which might require encryption devices, USB mounts, etc.
- improvement
SAN Volume Modification
VM/Containers that are SAN mounted can now be modified after creation, enabling a user to detach a LUN from one VM/container and attach it to another.
- improvement
SAN ISCSI Initiator Name
Cycle now generates unique initiator names on startup for all SAN-enabled servers.
- improvement
Virtual Provider Modal Refactor
The virtual provider modal has been rebuilt to utilize the 'resource modal', enabling easier management of Virtual Provider integrations, especially those with dozens of ISOs.
- added
Environment Service Log Draining
Services can now be excluded from the log drain.
- fixed
API Key / Pipeline Keys IP Modification
IP restrictions can now be removed entirely from keys that previously had restrictions attached.
- 2025.09.03.1
SAN (Beta), Faster Migrations, Smarter Deployments, and More
We’re excited to introduce Storage Area Network (SAN) support (beta), making it easier to store large amounts of data reliably for both containers and virtual machines. This update also brings more flexible IP pool options, much faster migration speeds (up to 80% quicker), and smarter deployment strategies for edge, distribution, and HA setups. On top of that, we’ve fixed issues with instance migration and public IP filtering to make your environments more stable and dependable.
- added
SAN Support (BETA)
Servers deployed via Virtual Providers can now utilize SANs, enabling large, persistent data storage for both containers and virtual machines.
- improvement
Discovery Via TCP
The discovery service will now default to TCP resolution when resolving external DNS domains, enabling larger DNS payloads.
- improvement
Distribution, Edge, and HA Deployment Strategies
These orchestrators have been completely rebuilt to better account for existing instances, yielding a more calculated deployment of new instances.
- fixed
Instance Migration Race Condition
When moving a stateful container that utilizes a large (>1GB) volume across cloud providers, a timeout could occur which would trigger another migration attempt before the first was complete. This has been fixed.
- improvement
LACP Bonds on Virtual Provider
The previous way we handled LACP negotiation didn't work for all NIC vendors; our new process supports a much larger range of NICs, if not all.
- improvement
Migration Speed
We've increased the block size for streaming migrations, decreasing the time for migrations by as much as 80% -- depending on link speed.
- added
IP Pool Options
Self-managed IP pools, added via Virtual Providers, can now be customized further to enable features like ARP proxying.
- fixed
Public IP Filtering
The traffic rule introduced to prevent publicly accessible containers in different environments from communicating was overly strict and, in rare cases on single-node setups, blocked traffic that should have been valid. This has been fixed.
- 2025.07.24.1
IPv6 Support for Virtual Providers, Serial Console Access, and an SDN API Change
This update enhances platform flexibility with IPv6 support for Virtual Providers and improves overall system stability with critical fixes. It also refines API behavior to align with broader update patterns for a more consistent experience.
- improvement
Dual Stack Virtual Providers
Added the ability to configure IPv6 for virtual provider servers via the portal / API.
- fixed
Environment Service Updates
Previously, automatic updates could occasionally break the ability to SSH into a service container.
- changed
SDN L2 API
The updating of an L2 configuration has been moved from a PATCH to a reconfiguration job task to align with how other update calls are implemented in the API. This is a breaking change, but the base functionality was introduced in the last build.
- fixed
Cycle Kernel with UEFI Console
A recent kernel build broke console output on some bare metal models; this is now fixed.
- 2025.07.15.1
Incremental Improvements to VM Provisioning, Deployments, and Networking
This incremental release brings improvements and fixes to virtual machine provisioning, container deployments, and networking.
- fixed
Volume Provision for VMs
A race condition existed where an empty base disk could be mounted as a CD-ROM during VM provisioning; this has been fixed.
- changed
Container Deployment Strategy Updates
The platform now allows a container's deployment strategy to be changed after creation, provided one was never specified initially.
- improvement
Firewall NAT Rule
We've changed the rule that prevented egress containers from talking to each other over their NAT IP. Previously, this blocked any egress traffic to any 10.x address, but now we're specifically only limiting traffic from 10.10.x.x to another 10.10.x.x (NAT egress).
- improvement
Traffic Drain Notifications
We now push traffic drain notifications down the internal API's notification pipeline, enabling containers to listen for drain events for other containers.
- added
Per-Deployment Stack Variables
Via the 'Deploy Stack' step within pipelines, users can now define variables that will override a stack's build-time variables at the time of deployment. This is beneficial for teams looking to deploy a single stack to multiple environments, but with unique variables per environment.
- added
Bare Metal H100 + A100 @ Vultr
The platform now supports provisioning of A100/H100 bare metal at Vultr.
- fixed
Container Deprecate Restart
Previously, marking a container as deprecated would cause the container to restart. This issue has been resolved.
- 2025.06.25.1
ALIAS DNS Support and New Internal API IP Metas
This release brings additional metadata functionality to internal API and expands DNS capabilities with ALIAS record support. We've also fixed an edge case in load balancer configurations applied through stacks where submitting a null config could cause issues.
- added
Internal API / SDN IPs
The internal API now supports ?meta=sdn_pool_ips for containers that belong to SDN networks which utilize their own IP pools.
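As a hedged sketch of how a client might request this meta field: the base URL, path, and response shape below are assumptions for illustration; only the `?meta=sdn_pool_ips` query parameter comes from the changelog.

```python
from urllib.parse import urlencode

# Placeholder host: the real internal API endpoint will differ.
INTERNAL_API = "http://internal-api.example/v1/containers"

def container_url(container_id: str, include_sdn_ips: bool = False) -> str:
    """Build the internal-API container URL, optionally requesting the
    sdn_pool_ips meta field for containers on SDN networks with their
    own IP pools."""
    url = f"{INTERNAL_API}/{container_id}"
    if include_sdn_ips:
        # Ask the API to include the SDN pool IPs in its response.
        url += "?" + urlencode({"meta": "sdn_pool_ips"})
    return url
```

Consult the platform's internal API documentation for the actual endpoint and response schema.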
- added
Support for ALIAS DNS Records
ALIAS records, which are similar to CNAMEs but utilized for zone origins, are now supported.
- fixed
Load Balancer Config within Stacks
If a stack specified a load balancer service but omitted a config, the load balancer's config would be reset on each subsequent deployment. Now, an empty config will no longer reset a previously deployed load balancer.
- 2025.06.09.2
Better IP Handling, More SDN Envs, LB Fixes
This release features a set of improvements, fixes, and additions focused on smarter IP handling, easier network scaling, and more reliable load balancer behavior. These changes continue our march toward simplifying operations on the platform while we work toward more monitoring and observability changes in the coming month.
- improvement
SDN IP Migrate
Container instances deployed to virtual provider servers now retain their SDN static pooled IPs as long as the migration is within the same region.
- improvement
SDN L3 Max Environment Increase
The maximum number of environments that can be added to a layer 3 SDN network has been raised to 15.
- improvement
Allowed Metrics Increased
Each tier of monitoring now has an increased number of total metrics included in the tier package.
- fixed
Ability to Initialize LB without IPs
There was a bug that would cause load balancers on virtual provider nodes to not properly initialize when started without IPs. This issue has been resolved. This is valuable for users setting up environments which will exist exclusively on private networks, or behind a Cloudflare Tunnel.
- fixed
Raw Stack Patch
There was an issue with patching raw stacks via API where the platform would not properly handle variables defined in the stack. This issue has been resolved.
- added
Load Balancer Micro Cache
A 5-second cache has been added to the load balancer that retains information used to decide destination prioritization, greatly reducing the overall pressure on the load balancer during times of increased traffic.
- improvement
Image Source Delete
Image sources with images that are not being used by any container can now be deleted without first needing to delete every image from the source.
- 2025.06.03.4
Faster VMs, Smarter Scaling, and New Tools
In this update, users will see double the write speed performance on all virtual machines and can now utilize a VNC connection for VM interaction! We're also proud to announce that auto-scaling has moved out of beta and now supports a custom webhook for even more granular controls. The portal was improved with new charts, better container logs, and better storage visibility.
- improvement
Faster VM Writes
We’ve made major improvements to VM storage performance resulting in a doubling of write speeds.
- fixed
VPN Access to VMs
Fixed an issue where VMs weren’t reachable over the VPN. They now route correctly.
- improvement
Auto-Scaling Out of Beta
Auto-scaling has been stable for a while and we've recently made major improvements to performance, reliability, and responsiveness. The beta tag has been removed.
- added
Custom Webhooks for Auto-Scaling
You can now trigger custom webhooks when scale events occur, giving users full control over scaling logic.
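A minimal receiver for such webhooks can be sketched as below. This is an assumption-laden illustration: the payload fields (`event`, `instances`) are hypothetical, and the real scale-event schema should be taken from the platform documentation.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ScaleEventHandler(BaseHTTPRequestHandler):
    """Minimal webhook receiver for scale events (illustrative only)."""

    events = []  # collected payloads, for inspection

    def do_POST(self):
        # Read and parse the JSON body the platform POSTs to us.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        ScaleEventHandler.events.append(payload)
        # Acknowledge quickly; do real work (alerts, audits) asynchronously.
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port=0):
    """Create the receiver; port=0 picks a free ephemeral port."""
    return HTTPServer(("127.0.0.1", port), ScaleEventHandler)
```

Responding with a fast 2xx and deferring any heavy processing keeps the webhook from delaying or timing out the scaling pipeline on the sender's side.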
- added
VNC for VMs
Virtual machines can now utilize a VNC connection for enhanced interaction.
- added
Compute Storage Summary
Added a detailed storage breakdown on the server view. Useful for debugging disk issues and tracking down container file sprawl.
- changed
Log Drain Moved
Log drain config is now applied at the environment level instead of per container.
- improvement
Improved Container Logs
Logs now show up with syntax highlighting and color-coded formatting, making them easier to read at a glance.
- improvement
DNS Lookups Chart
The DNS lookups chart will now show deeper information on cached and throttled hits including success, fail, and not found data points.
- 2025.05.21.2
Improved Networking UX, Easier Billing, and DNS Fixes
After our last update, this is a small quality-of-life patch. It's mainly focused on improvements to billing access and container networking visibility, and includes a nice fix for custom DNS resolvers.
- fixed
Custom Resolvers
A bug was uncovered that would cause custom resolvers to only work with CNAME records. This has been resolved.
- added
Invoice Downloads
Users can now download invoices directly from billing emails, forgoing the previous requirement of logging into the portal for the download.
- added
Attached Networks
The container instances page now shows all attached networks for a given instance in one succinct view, making it easier to quickly view network details.
- improvement
VPN Configs Over IPv6
The platform now supports downloading VPN configuration files through load balancers that have only IPv6 enabled.
- 2025.05.15.1
Our Biggest Platform Release in Years: Virtual Providers and Virtual Machines
This release marks a new era of hybrid infrastructure orchestration and cements the platform's status as a true alternative to both Kubernetes and VMware. It is easily the biggest release in years for our organization, and we couldn't be more excited to get it into the hands of our users! The biggest piece of this major release is the capability to now run any kind of workload anywhere -- while still maintaining the efficiency, standardization, and automation that the platform brings. We can't wait to see what you're able to build.
- added
Virtual Providers
Virtual Providers make it simple to add any x86-compatible (Intel, AMD, etc.) infrastructure to your Cycle clusters, unlocking the full potential of bare metal and massively reducing the technical lift for on-prem, colo, or non-native bare metal cloud offerings.
- added
Virtual Machines
For workloads that don't play nicely with containers, we now support running virtual machines alongside your containerized workloads in environments. Great for legacy apps, maintaining hybrid stacks, and even running a full on OS inside the environment.
- added
Deployment-Restricted Scoped Variables
Scoped variables now support being scoped to deployments. This gives users the flexibility to vary certain scoped variables per deployment without the headache of making dynamic, on-the-fly changes to scoped variables within the environment.
- added
'Fixed' Destination Prioritization for LB V1
The V1 load balancer now supports fixed destination prioritization. This feature will mostly be used alongside source IP routing to help ensure the same requesting IP is routed to the same container instance.
- added
Server IP Pools (Virtual Provider)
Users can now add IPs to virtual provider servers so that containers deployed to them with an L2 network can allocate their own IPs.
- added
L2 Networks (SDN)
The platform now supports Layer 2 software-defined networking via its Networks primitive. This enables L2 connectivity across your infrastructure for more advanced networking needs.
- added
L2 Domains
Containers on Cycle can now connect directly to Layer 2 networks, not just at the environment level. This allows for tighter control over how workloads interact with external infrastructure or broadcast domains.
- added
Expose Host's Cgroups
Users can now choose to expose the underlying host's Cgroups to a container. This aids in building things such as monitoring functionality.
- added
Expose Power API
Users can now give a container the ability to shut down a server via the internal API by opting into the expose power API.
- improvement
Upgraded to Linux Kernel 6.6.17
The Linux kernel used by CycleOS has been upgraded to 6.6.17.
- added
Log Volume
Each server now mounts a 10GB hard-capped log volume. This guards against disk pressure caused by containers with uncontrolled log output filling the server's disk entirely. Once disk usage for this volume hits 90%, log retention is reduced from 72 to 48 hours.
- 2025.04.24.2
Traffic Draining, Source IP Routing, and Tons of Improvements
An exciting release as we move into the end of April and prepare for an awesome summer of updates. Users can now mark instances to drain traffic, signaling the platform to stop routing new connections to them while existing sessions wind down safely. The V1 load balancer gets some nice flexibility improvements, and servers now support nicknaming. New graphs for server telemetry have been added, and container instance network telemetry graphs have been fixed. This release marks the beginning of an impressive schedule of releases we have moving into summer, so keep your eyes peeled for changelog updates!
- improvement
Source IP Routing
V1 load balancer routers now support source IP routing mode. This allows for more consistent and predictable routing to instances that require more durable sessions.
- added
Server Network Telemetry Graph
A new server telemetry graph has been added to the portal that shows transmit and receive bytes for individual nodes.
- added
Traffic Draining
Container instances can now be marked for traffic draining, informing the platform that traffic should no longer be sent to that instance. For load balancers, the platform will stop traffic to that load balancer making it safe to remove, restart, or reconfigure.
- fixed
Container Telemetry Transmit and Receive
Container instance network telemetry data had an issue where transmit and receive data was flipped. This has been fixed and now shows correctly.
- improvement
SFTP Lockdowns
As always, SFTP on any server will go into lockdown after a spike in failed login attempts. Users who were successfully authorized prior to the lockdown can now continue their session uninterrupted.
- added
Server Nicknames
All servers now support adding a nickname, making it simpler to track individual servers in a cluster and hub.
- added
Restart Container
A button has been added for restarting containers. For containers with multiple instances, the restart stagger will also be automatically applied.
- improvement
Load Balancer IP Summary
Load balancer IPs on the environment summary now show the exact assigned IP instead of the associated CIDR from which an IP is assigned.
- improvement
Retry Image Downloads
The compute service now tries multiple times to download container images from factory if there is an interruption.
- improvement
UID/GID for Scoped Variables
Added support for specifying permissions and UID/GID for injected scoped variable files.
- added
Console Buffer Increased
Increased the console buffer on containers, making more room for logging during times when the compute service is updating or restarting.
- 2025.03.19.3
Private Load Balancing, New Pipeline Steps, and Advanced Sysctl Commands
This update brings a focus on flexibility in environments and pipelines. Users will enjoy a new pipeline step (deprecate container) and new ways to use named resource identifiers. Load balancers can now run without public IPs assigned to them, opening the door to more dynamic, zero-trust architectures. In the API, filtering got an upgrade with the addition of filtering on the deprecated tag for containers. Finally, users who need to take deeper control of IPv6 settings can use the disable_ipv6 setting for further granularity in networking control on containers.
- added
Private Load Balancers
Load balancers can now be enabled without public IPs. This is valuable for load-balancing private applications within an environment that might not need public internet access -- i.e. Cloudflare tunnels.
- improvement
Named Arguments in Resource Identifiers
We now support arguments like deployment.version and deployment.tag as parameters to a resource identifier in pipelines. With these arguments, teams can build significantly more flexible pipelines furthering automation efforts.
- added
Deprecate Containers Step
Containers can now be deprecated via pipelines.
- improvement
Jobs Endpoint
The jobs endpoint wasn't properly limited to the expected capability for API keys.
- improvement
Filtering via Deprecation State
In the API, containers can now be filtered by their deprecation state using ?filter[deprecated]=true/false
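The query string for this filter can be built like so; this is a sketch only, and everything except the `filter[deprecated]` parameter name (which comes from the changelog) is an assumption.

```python
from urllib.parse import urlencode

def deprecated_filter_query(deprecated: bool) -> str:
    """Build the query string to filter containers by deprecation state.

    Note that urlencode percent-encodes the brackets in the
    filter[deprecated] parameter name, which servers decode normally.
    """
    return urlencode({"filter[deprecated]": "true" if deprecated else "false"})
```

The resulting string would be appended to the containers endpoint, e.g. `/v1/containers?` plus the query (the endpoint path here is a placeholder).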
- added
Disable IPv6
While we don't recommend disabling IPv6, there may be specific cases where it is required. By setting net.ipv6.conf.all.disable_ipv6 to 1, Cycle now fully disables IPv6 for a container.
- 2025.02.20.1
Improved Load Balancer Routing, Upgraded TLS, and Administrative Flexibility
This release brings users a handful of solid improvements and a couple of needed fixes. The V1 load balancer routers got a fix to path matching in routes and a major improvement to the predictability of route selection: now the first match will always win. On the security front, a now-deprecated cryptographic algorithm was removed, and the platform now enforces a higher minimum TLS version. Finally, hub billing now supports multiple billing contacts, expanding flexibility on who in an organization receives important emails.
- improvement
Integration Deprecations
Hub integrations can now be deprecated. This demarcation has no effect on existing integrations but will prevent new instances of that integration from being added.
- improvement
SSH TLS Update
Removed a now-deprecated cryptographic algorithm and enforced a minimum TLS version of 1.3.
- fixed
Route Matching Predictability
We've refactored how the LB makes routing decisions to eliminate a race condition that existed with path matching. Now, router matching is significantly faster and more predictable, with the first (top) match always winning.
- improvement
Billing Contact
Organizations can now update their billing and tax information within the portal. Additionally, organizations can subscribe additional email addresses to invoice notifications.
- improvement
User Uploaded Wildcard TLS Certificates
We've expanded the functionality of user uploaded certificates to work with wildcard certificates. While we've supported LetsEncrypt wildcards for years, our recently added 'user uploaded certificates' did not support wildcards until now.
- 2025.01.21.2
New Load Balancer Metrics, Log Drain Format, and Solid Platform Fixes
This update features a slew of more granular load balancer metrics for Cycle's V1 load balancer. These new metrics also come with additional tooling in the portal that allows users to make specific filters when debugging network traffic. The log drain format can now be customized, offering higher flexibility for integrations with existing services, and the platform received some great fixes that should lead to even more stability.
- improvement
Log Drain Format
The format of log output can now be customized via the container config integration.
- added
Load Balancer Metrics
The V1 load balancer now collects more granular metrics that can be helpful in diagnosing application issues. A restart of the load balancer is required to gain these additional metrics. Additionally, users can now filter load balancer metrics based on domains and HTTP response codes in the load balancer's URLs tab.
- fixed
File Ownership
We found a bug during our OCI image merging where, under certain conditions, files could lose their respective user/group ownership. This is fixed for all future image builds.
- fixed
Instance Migration
An instance can no longer be migrated if an existing migration is already occurring for that instance.
- fixed
Migration Deadlock
If an instance failed to migrate after 16 attempts, it could cause a node deadlock preventing future actions on that server. This has been resolved.
- 2025.01.14.3
Stack Build Logs, Stability Improvements, and Better Auto-Scaling
In this release, users will find a wonderful new feature: stack build logs. These logs give insight into debugging stack builds that, in the past, have been cumbersome to unpack. Alongside the stack build logs are a slew of stability improvements, including a new agent logging mechanism that enables even more resiliency on each server node during times of high usage. Additionally, auto-scaling was improved, requiring fewer window intervals before a scaling event can happen, resulting in even more responsive auto-scaling from the platform.
- improvement
Metric Label Standardization
We noticed a few inconsistencies in the naming conventions for metric/event labels. While they're now fixed, certain graphs within the portal will take a little time to populate with new data.
- added
Stack Build Logs
Stacks now generate build logs that detail the overall build of images, stack parsing, etc, making it significantly easier to debug variable/stack formatting issues.
- improvement
External Log Draining
Although this was beta released in December's build, we've made a number of optimizations to provide more context (via HTTP headers) while also reducing the amount of redundant meta information in the POST body.
- fixed
Container Reimage / Deleted Instances
If a container was reimaged immediately after a scale event, any deleted instances would be undeleted. This has been fixed.
- added
Block Storage / Raw Volumes
While not accessible yet, we've added the ability to create volumes as raw block devices on compute nodes -- preparing for some soon to be announced features.
- added
Migrating Block Storage Volumes
Added support for moving block storage volumes between compute nodes.
- improvement
GCP Generation 3 + 4 Models
We've compiled the required kernel modules into CycleOS to support the generation 3 and 4 (c3, c4, n3, m3) and accelerator (GPU) GCP machine types.
- improvement
Autoscaling Threshold Calculation
We rebuilt the logic for network / bandwidth scaling thresholds to enable more responsive scaling. Previously, network scaling events required two interval windows to pass before a scaling event could occur.
- added
CycleOS Build Version
Servers now report their CycleOS build version during their check-ins. This version is also displayed in the portal.
- added
Agent/Logging Volume
Following a restart of the server, CycleOS will now build a dedicated 2GB volume for storing logs. By moving logs to their own volume, nodes will no longer deadlock / become unresponsive if disk usage reaches 100%.
- improvement
Hub Delete Prevention
Hubs with active servers can no longer be deleted, unless the 'force' flag is specified.
- fixed
Attached Storage Size Customization
When deploying servers at AWS or GCP, users can customize the size of the underlying block device. This functionality was broken in December's release.
- improvement
Cycle IPs in LB WAF
Previously, it was possible to accidentally block VPN configuration via the portal by applying restrictive rules to the WAF (web application firewall). Now, the WAF will automatically detect the necessary Cycle IPs to allow VPN configuration without organizations needing to manage the IP list themselves.