author    Yannik Sander  2021-03-30 17:32:58 +0200
committer Yannik Sander  2021-06-22 13:41:15 +0200
commit    1a2d35be27de412bd2c406ed01189dc93ae0985a (patch)
tree      93dcc7af9e4e8e62ba73d4da5429e77c81f3572c
parent    70d71b3027b1793b780f1e2435bdbbe1b0cb9ac6 (diff)
Add multi node support
Run multiple deployments in sequence
Resolve targets later
Extend context by deployed flake
Apply clippy suggestions
Add revoke command builder
Track succeeded deploys
Add revoke function
Register revoke error as deploy error
Prepare revoke command in activate
Extend logger to handle revoke
Implement revoke command client side
Run revoke on previously succeeded deploys
Control whether to override by flag
Adhere to profile configuration auto_rollback setting
Cargo fmt
Correctly provide profile path to activation script when revoking
Document multi flake mode in README
Resolve a typo in README.md
Co-authored-by: notgne2 <gen2@gen2.space>
Use existing terminology: rename revoke_suceeded -> rollback_suceeded
Use more open CLI argument name `targets` instead of `flakes`
Document name changes in README
Add sudo command support for revokes
Call run_deploy with `dry_active` flag
Test that revoke commands contain sudo
Set default temp_path in activate binary
Require temp_path for wait and activate subcommands
Add copyright comment
Address review change requests
Fix typo in README
Co-authored-by: Alexander Bantyev <balsoft@balsoft.ru>
Diffstat
-rw-r--r--  README.md             19
-rw-r--r--  src/bin/activate.rs   40
-rw-r--r--  src/bin/deploy.rs    278
-rw-r--r--  src/deploy.rs        122
-rw-r--r--  src/lib.rs            82
5 files changed, 399 insertions, 142 deletions
diff --git a/README.md b/README.md
index c8a132f..acd2b7f 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,6 @@
<!--
SPDX-FileCopyrightText: 2020 Serokell <https://serokell.io/>
+SPDX-FileCopyrightText: 2021 Yannik Sander <contact@ysndr.de>
SPDX-License-Identifier: MPL-2.0
-->
@@ -16,18 +17,26 @@ Questions? Need help? Join us on Matrix: [`#deploy-rs:matrix.org`](https://matri
Basic usage: `deploy [options] <flake>`.
-The given flake can be just a source `my-flake`, or optionally specify the node to deploy `my-flake#my-node`, or specify a profile too `my-flake#my-node.my-profile`. If your profile or node name has a `.` in it, simply wrap it in quotes, and the flake path in quotes (to avoid shell escaping), for example `'my-flake."myserver.com".system'`.
+Using this method, all profiles specified in the given `<flake>` will be deployed (taking into account the [`profilesOrder`](#node)).
+
+Optionally, the flake can be constrained to deploy just a single node (`my-flake#my-node`) or a profile (`my-flake#my-node.my-profile`).
+
+If your profile or node name has a `.` in it, simply wrap it in quotes, and the flake path in quotes (to avoid shell escaping), for example `'my-flake."myserver.com".system'`.
+
+Any "extra" arguments will be passed into the Nix calls, so for instance to deploy an impure profile, you may use `deploy . -- --impure` (note the explicit flake path is necessary for doing this).
You can try out this tool easily with `nix run`:
- `nix run github:serokell/deploy-rs your-flake`
-Any "extra" arguments will be passed into the Nix calls, so for instance to deploy an impure profile, you may use `deploy . -- --impure` (note the explicit flake path is necessary for doing this).
+If you want to deploy multiple flakes or a subset of profiles with one invocation, instead of calling `deploy <flake>` you can issue `deploy --targets <flake> [<flake> ...]`, where each `<flake>` takes the same format as discussed above.
+
+Running in this mode, if any of the deploys fails, the run is aborted and all previously successful deploys are rolled back. `--rollback-succeeded false` can be used to override this behavior; otherwise the `auto-rollback` argument takes precedence.
If you require a signing key to push closures to your server, specify the path to it in the `LOCAL_KEY` environment variable.
Check out `deploy --help` for CLI flags! Remember to check there before making one-time changes to things like hostnames.
-There is also an `activate` binary though this should be ignored, it is only used internally and for testing/hacking purposes.
+There is also an `activate` binary, though this should be ignored; it is only used internally (on the deployed system) and for testing/hacking purposes.
## Ideas
@@ -79,7 +88,7 @@ A basic example of a flake that works with `deploy-rs` and deploys a simple NixO
### Profile
-This is the core of how `deploy-rs` was designed, any number of these can run on a node, as any user (see further down for specifying user information). If you want to mimick the behaviour of traditional tools like NixOps or Morph, try just defining one `profile` called `system`, as root, containing a nixosSystem, and you can even similarly use [home-manager](https://github.com/nix-community/home-manager) on any non-privileged user.
+This is the core of how `deploy-rs` was designed, any number of these can run on a node, as any user (see further down for specifying user information). If you want to mimic the behaviour of traditional tools like NixOps or Morph, try just defining one `profile` called `system`, as root, containing a nixosSystem, and you can even similarly use [home-manager](https://github.com/nix-community/home-manager) on any non-privileged user.
```nix
{
@@ -128,7 +137,7 @@ This is the top level attribute containing all of the options for this tool
{
nodes = {
# Definition format shown above
- my-node = {};
+ my-node = {};
another-node = {};
};
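The README paragraph above introduces `--rollback-succeeded` and how it interacts with `auto-rollback`. The precedence it describes matches the deployment loop added to `src/bin/deploy.rs` further down. The following is a minimal, self-contained sketch of that decision, not code from the patch: the function name and its arguments are illustrative, and every knob defaults to rolling back, as in the patch.

```rust
// Illustrative only: models the rollback decision made after a failed deploy.
// `cli_rollback_succeeded` stands for `--rollback-succeeded`,
// `global_auto_rollback` for the `auto-rollback` override, and
// `profile_auto_rollback` for the profile's own auto_rollback setting.
fn should_rollback(
    cli_rollback_succeeded: Option<bool>,
    global_auto_rollback: Option<bool>,
    profile_auto_rollback: Option<bool>,
) -> bool {
    cli_rollback_succeeded.unwrap_or(true)
        && global_auto_rollback.unwrap_or(true)
        && profile_auto_rollback.unwrap_or(true)
}

fn main() {
    // `--rollback-succeeded false` disables rollbacks entirely.
    assert!(!should_rollback(Some(false), None, None));
    // Otherwise the auto-rollback settings decide, per profile.
    assert!(should_rollback(None, None, None));
    assert!(!should_rollback(None, None, Some(false)));
}
```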
diff --git a/src/bin/activate.rs b/src/bin/activate.rs
index d17f3a8..6e18652 100644
--- a/src/bin/activate.rs
+++ b/src/bin/activate.rs
@@ -1,5 +1,6 @@
// SPDX-FileCopyrightText: 2020 Serokell <https://serokell.io/>
// SPDX-FileCopyrightText: 2020 Andreas Fuchs <asf@boinkor.net>
+// SPDX-FileCopyrightText: 2021 Yannik Sander <contact@ysndr.de>
//
// SPDX-License-Identifier: MPL-2.0
@@ -33,10 +34,6 @@ struct Opts {
#[clap(long)]
log_dir: Option<String>,
- /// Path for any temporary files that may be needed during activation
- #[clap(long)]
- temp_path: String,
-
#[clap(subcommand)]
subcmd: SubCommand,
}
@@ -45,6 +42,7 @@ struct Opts {
enum SubCommand {
Activate(ActivateOpts),
Wait(WaitOpts),
+ Revoke(RevokeOpts),
}
/// Activate a profile
@@ -70,6 +68,10 @@ struct ActivateOpts {
/// Show what will be activated on the machines
#[clap(long)]
dry_activate: bool,
+
+ /// Path for any temporary files that may be needed during activation
+ #[clap(long)]
+ temp_path: String,
}
/// Activate a profile
@@ -77,6 +79,17 @@ struct ActivateOpts {
struct WaitOpts {
/// The closure to wait for
closure: String,
+
+ /// Path for any temporary files that may be needed during activation
+ #[clap(long)]
+ temp_path: String,
+}
+
+/// Revoke a profile
+#[derive(Clap, Debug)]
+struct RevokeOpts {
+ /// The profile path to revoke
+ profile_path: String,
}
#[derive(Error, Debug)]
@@ -429,6 +442,16 @@ pub async fn activate(
Ok(())
}
+#[derive(Error, Debug)]
+pub enum RevokeError {
+ #[error("There was an error de-activating after an error was encountered: {0}")]
+ DeactivateError(#[from] DeactivateError),
+}
+async fn revoke(profile_path: String) -> Result<(), RevokeError> {
+ deactivate(profile_path.as_str()).await?;
+ Ok(())
+}
+
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Ensure that this process stays alive after the SSH connection dies
@@ -447,6 +470,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
match opts.subcmd {
SubCommand::Activate(_) => deploy::LoggerType::Activate,
SubCommand::Wait(_) => deploy::LoggerType::Wait,
+ SubCommand::Revoke(_) => deploy::LoggerType::Revoke,
},
)?;
@@ -455,7 +479,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
activate_opts.profile_path,
activate_opts.closure,
activate_opts.auto_rollback,
- opts.temp_path,
+ activate_opts.temp_path,
activate_opts.confirm_timeout,
activate_opts.magic_rollback,
activate_opts.dry_activate,
@@ -463,7 +487,11 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
.await
.map_err(|x| Box::new(x) as Box<dyn std::error::Error>),
- SubCommand::Wait(wait_opts) => wait(opts.temp_path, wait_opts.closure)
+ SubCommand::Wait(wait_opts) => wait(wait_opts.temp_path, wait_opts.closure)
+ .await
+ .map_err(|x| Box::new(x) as Box<dyn std::error::Error>),
+
+ SubCommand::Revoke(revoke_opts) => revoke(revoke_opts.profile_path)
.await
.map_err(|x| Box::new(x) as Box<dyn std::error::Error>),
};
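Condensed from the scattered hunks above, the CLI surface of the `activate` binary after this patch looks roughly as follows. This is a sketch using the same `clap::Clap` derive as the file (clap 3 beta); doc comments, defaults, and the remaining options are omitted. The practical effect is that `--temp-path` is now passed after the `activate`/`wait` subcommand, which is why the command builders in `src/deploy.rs` below reorder their arguments.

```rust
// Sketch only: mirrors the subcommand layout after this patch.
use clap::Clap;

#[derive(Clap, Debug)]
struct Opts {
    #[clap(subcommand)]
    subcmd: SubCommand,
}

#[derive(Clap, Debug)]
enum SubCommand {
    Activate(ActivateOpts),
    Wait(WaitOpts),
    Revoke(RevokeOpts),
}

#[derive(Clap, Debug)]
struct ActivateOpts {
    closure: String,
    profile_path: String,
    // Moved here from the top-level options.
    #[clap(long)]
    temp_path: String,
}

#[derive(Clap, Debug)]
struct WaitOpts {
    closure: String,
    #[clap(long)]
    temp_path: String,
}

#[derive(Clap, Debug)]
struct RevokeOpts {
    profile_path: String,
}

fn main() {
    // e.g. `activate-rs wait '/nix/store/...' --temp-path '/tmp'`
    let opts = Opts::parse();
    println!("{:?}", opts.subcmd);
}
```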
diff --git a/src/bin/deploy.rs b/src/bin/deploy.rs
index 10e0552..4419ef1 100644
--- a/src/bin/deploy.rs
+++ b/src/bin/deploy.rs
@@ -1,4 +1,5 @@
// SPDX-FileCopyrightText: 2020 Serokell <https://serokell.io/>
+// SPDX-FileCopyrightText: 2021 Yannik Sander <contact@ysndr.de>
//
// SPDX-License-Identifier: MPL-2.0
@@ -7,6 +8,8 @@ use std::io::{stdin, stdout, Write};
use clap::Clap;
+use deploy::{DeployFlake, ParseFlakeError};
+use futures_util::stream::{StreamExt, TryStreamExt};
use log::{debug, error, info, warn};
use serde::Serialize;
use std::process::Stdio;
@@ -14,12 +17,16 @@ use thiserror::Error;
use tokio::process::Command;
/// Simple Rust rewrite of a simple Nix Flake deployment tool
-#[derive(Clap, Debug)]
+#[derive(Clap, Debug, Clone)]
#[clap(version = "1.0", author = "Serokell <https://serokell.io/>")]
struct Opts {
/// The flake to deploy
- #[clap(default_value = ".")]
- flake: String,
+ #[clap(group = "deploy")]
+ target: Option<String>,
+
+ /// A list of flakes to deploy alternatively
+ #[clap(long, group = "deploy")]
+ targets: Option<Vec<String>>,
/// Check signatures when using `nix copy`
#[clap(short, long)]
checksigs: bool,
@@ -77,6 +84,9 @@ struct Opts {
/// Show what will be activated on the machines
#[clap(long)]
dry_activate: bool,
+ /// Revoke all previously succeeded deploys when deploying multiple profiles
+ #[clap(long)]
+ rollback_succeeded: Option<bool>,
}
/// Returns if the available Nix installation supports flakes
@@ -159,9 +169,11 @@ enum GetDeploymentDataError {
/// Evaluates the Nix in the given `repo` and return the processed Data from it
async fn get_deployment_data(
supports_flakes: bool,
- flake: &deploy::DeployFlake<'_>,
+ flakes: &[deploy::DeployFlake<'_>],
extra_build_args: &[String],
-) -> Result<deploy::data::Data, GetDeploymentDataError> {
+) -> Result<Vec<deploy::data::Data>, GetDeploymentDataError> {
+ futures_util::stream::iter(flakes).then(|flake| async move {
+
info!("Evaluating flake in {}", flake.repo);
let mut c = if supports_flakes {
@@ -247,6 +259,7 @@ async fn get_deployment_data(
let data_json = String::from_utf8(build_output.stdout)?;
Ok(serde_json::from_str(&data_json)?)
+}).try_collect().await
}
#[derive(Serialize)]
@@ -259,11 +272,15 @@ struct PromptPart<'a> {
}
fn print_deployment(
- parts: &[(deploy::DeployData, deploy::DeployDefs)],
+ parts: &[(
+ &deploy::DeployFlake<'_>,
+ deploy::DeployData,
+ deploy::DeployDefs,
+ )],
) -> Result<(), toml::ser::Error> {
let mut part_map: HashMap<String, HashMap<String, PromptPart>> = HashMap::new();
- for (data, defs) in parts {
+ for (_, data, defs) in parts {
part_map
.entry(data.node_name.to_string())
.or_insert_with(HashMap::new)
@@ -298,7 +315,11 @@ enum PromptDeploymentError {
}
fn prompt_deployment(
- parts: &[(deploy::DeployData, deploy::DeployDefs)],
+ parts: &[(
+ &deploy::DeployFlake<'_>,
+ deploy::DeployData,
+ deploy::DeployDefs,
+ )],
) -> Result<(), PromptDeploymentError> {
print_deployment(parts)?;
@@ -363,109 +384,139 @@ enum RunDeployError {
TomlFormat(#[from] toml::ser::Error),
#[error("{0}")]
PromptDeployment(#[from] PromptDeploymentError),
+ #[error("Failed to revoke profile: {0}")]
+ RevokeProfile(#[from] deploy::deploy::RevokeProfileError),
}
type ToDeploy<'a> = Vec<(
+ &'a deploy::DeployFlake<'a>,
+ &'a deploy::data::Data,
(&'a str, &'a deploy::data::Node),
(&'a str, &'a deploy::data::Profile),
)>;
async fn run_deploy(
- deploy_flake: deploy::DeployFlake<'_>,
- data: deploy::data::Data,
+ deploy_flakes: Vec<deploy::DeployFlake<'_>>,
+ data: Vec<deploy::data::Data>,
supports_flakes: bool,
check_sigs: bool,
interactive: bool,
- cmd_overrides: deploy::CmdOverrides,
+ cmd_overrides: &deploy::CmdOverrides,
keep_result: bool,
result_path: Option<&str>,
extra_build_args: &[String],
debug_logs: bool,
- log_dir: Option<String>,
dry_activate: bool,
+ log_dir: &Option<String>,
+ rollback_succeeded: bool,
) -> Result<(), RunDeployError> {
- let to_deploy: ToDeploy = match (&deploy_flake.node, &deploy_flake.profile) {
- (Some(node_name), Some(profile_name)) => {
- let node = match data.nodes.get(node_name) {
- Some(x) => x,
- None => return Err(RunDeployError::NodeNotFound(node_name.to_owned())),
- };
- let profile = match node.node_settings.profiles.get(profile_name) {
- Some(x) => x,
- None => return Err(RunDeployError::ProfileNotFound(profile_name.to_owned())),
- };
-
- vec![((node_name, node), (profile_name, profile))]
- }
- (Some(node_name), None) => {
- let node = match data.nodes.get(node_name) {
- Some(x) => x,
- None => return Err(RunDeployError::NodeNotFound(node_name.to_owned())),
- };
-
- let mut profiles_list: Vec<(&str, &deploy::data::Profile)> = Vec::new();
-
- for profile_name in [
- node.node_settings.profiles_order.iter().collect(),
- node.node_settings.profiles.keys().collect::<Vec<&String>>(),
- ]
- .concat()
- {
- let profile = match node.node_settings.profiles.get(profile_name) {
- Some(x) => x,
- None => return Err(RunDeployError::ProfileNotFound(profile_name.to_owned())),
- };
-
- if !profiles_list.iter().any(|(n, _)| n == profile_name) {
- profiles_list.push((&profile_name, profile));
- }
- }
-
- profiles_list
- .into_iter()
- .map(|x| ((node_name.as_str(), node), x))
- .collect()
- }
- (None, None) => {
- let mut l = Vec::new();
-
- for (node_name, node) in &data.nodes {
- let mut profiles_list: Vec<(&str, &deploy::data::Profile)> = Vec::new();
-
- for profile_name in [
- node.node_settings.profiles_order.iter().collect(),
- node.node_settings.profiles.keys().collect::<Vec<&String>>(),
- ]
- .concat()
- {
+ let to_deploy: ToDeploy = deploy_flakes
+ .iter()
+ .zip(&data)
+ .map(|(deploy_flake, data)| {
+ let to_deploys: ToDeploy = match (&deploy_flake.node, &deploy_flake.profile) {
+ (Some(node_name), Some(profile_name)) => {
+ let node = match data.nodes.get(node_name) {
+ Some(x) => x,
+ None => Err(RunDeployError::NodeNotFound(node_name.to_owned()))?,
+ };
let profile = match node.node_settings.profiles.get(profile_name) {
Some(x) => x,
- None => {
- return Err(RunDeployError::ProfileNotFound(profile_name.to_owned()))
- }
+ None => Err(RunDeployError::ProfileNotFound(profile_name.to_owned()))?,
};
- if !profiles_list.iter().any(|(n, _)| n == profile_name) {
- profiles_list.push((&profile_name, profile));
- }
+ vec![(
+ &deploy_flake,
+ &data,
+ (node_name.as_str(), node),
+ (profile_name.as_str(), profile),
+ )]
}
+ (Some(node_name), None) => {
+ let node = match data.nodes.get(node_name) {
+ Some(x) => x,
+ None => return Err(RunDeployError::NodeNotFound(node_name.to_owned())),
+ };
- let ll: ToDeploy = profiles_list
- .into_iter()
- .map(|x| ((node_name.as_str(), node), x))
- .collect();
+ let mut profiles_list: Vec<(&str, &deploy::data::Profile)> = Vec::new();
+
+ for profile_name in [
+ node.node_settings.profiles_order.iter().collect(),
+ node.node_settings.profiles.keys().collect::<Vec<&String>>(),
+ ]
+ .concat()
+ {
+ let profile = match node.node_settings.profiles.get(profile_name) {
+ Some(x) => x,
+ None => {
+ return Err(RunDeployError::ProfileNotFound(
+ profile_name.to_owned(),
+ ))
+ }
+ };
+
+ if !profiles_list.iter().any(|(n, _)| n == profile_name) {
+ profiles_list.push((&profile_name, profile));
+ }
+ }
- l.extend(ll);
- }
+ profiles_list
+ .into_iter()
+ .map(|x| (deploy_flake, data, (node_name.as_str(), node), x))
+ .collect()
+ }
+ (None, None) => {
+ let mut l = Vec::new();
+
+ for (node_name, node) in &data.nodes {
+ let mut profiles_list: Vec<(&str, &deploy::data::Profile)> = Vec::new();
+
+ for profile_name in [
+ node.node_settings.profiles_order.iter().collect(),
+ node.node_settings.profiles.keys().collect::<Vec<&String>>(),
+ ]
+ .concat()
+ {
+ let profile = match node.node_settings.profiles.get(profile_name) {
+ Some(x) => x,
+ None => {
+ return Err(RunDeployError::ProfileNotFound(
+ profile_name.to_owned(),
+ ))
+ }
+ };
+
+ if !profiles_list.iter().any(|(n, _)| n == profile_name) {
+ profiles_list.push((&profile_name, profile));
+ }
+ }
- l
- }
- (None, Some(_)) => return Err(RunDeployError::ProfileWithoutNode),
- };
+ let ll: ToDeploy = profiles_list
+ .into_iter()
+ .map(|x| (deploy_flake, data, (node_name.as_str(), node), x))
+ .collect();
- let mut parts: Vec<(deploy::DeployData, deploy::DeployDefs)> = Vec::new();
+ l.extend(ll);
+ }
- for ((node_name, node), (profile_name, profile)) in to_deploy {
+ l
+ }
+ (None, Some(_)) => return Err(RunDeployError::ProfileWithoutNode),
+ };
+ Ok(to_deploys)
+ })
+ .collect::<Result<Vec<ToDeploy>, RunDeployError>>()?
+ .into_iter()
+ .flatten()
+ .collect();
+
+ let mut parts: Vec<(
+ &deploy::DeployFlake<'_>,
+ deploy::DeployData,
+ deploy::DeployDefs,
+ )> = Vec::new();
+
+ for (deploy_flake, data, (node_name, node), (profile_name, profile)) in to_deploy {
let deploy_data = deploy::make_deploy_data(
&data.generic_settings,
node,
@@ -479,7 +530,7 @@ async fn run_deploy(
let deploy_defs = deploy_data.defs()?;
- parts.push((deploy_data, deploy_defs));
+ parts.push((deploy_flake, deploy_data, deploy_defs));
}
if interactive {
@@ -488,7 +539,7 @@ async fn run_deploy(
print_deployment(&parts[..])?;
}
- for (deploy_data, deploy_defs) in &parts {
+ for (deploy_flake, deploy_data, deploy_defs) in &parts {
deploy::push::push_profile(deploy::push::PushProfileData {
supports_flakes,
check_sigs,
@@ -502,8 +553,32 @@ async fn run_deploy(
.await?;
}
- for (deploy_data, deploy_defs) in &parts {
- deploy::deploy::deploy_profile(&deploy_data, &deploy_defs, dry_activate).await?;
+ let mut succeeded: Vec<(&deploy::DeployData, &deploy::DeployDefs)> = vec![];
+
+ // Run all deployments
+ // In case of an error, roll back any previously made deployment.
+ // Rollbacks adhere to the global auto_rollback setting first and
+ // secondarily to the profile's configuration
+ for (_, deploy_data, deploy_defs) in &parts {
+ if let Err(e) = deploy::deploy::deploy_profile(deploy_data, deploy_defs, dry_activate).await {
+ error!("{}", e);
+ if dry_activate {
+ info!("dry run, not rolling back");
+ }
+ info!("Revoking previous deploys");
+ if rollback_succeeded && cmd_overrides.auto_rollback.unwrap_or(true) {
+ // revoking all previous deploys
+ // (adheres to profile configuration if not set explicitly by
+ // the command line)
+ for (deploy_data, deploy_defs) in &succeeded {
+ if deploy_data.merged_settings.auto_rollback.unwrap_or(true) {
+ deploy::deploy::revoke(*deploy_data, *deploy_defs).await?;
+ }
+ }
+ }
+ break;
+ }
+ succeeded.push((deploy_data, deploy_defs))
}
Ok(())
@@ -538,7 +613,16 @@ async fn run() -> Result<(), RunError> {
deploy::LoggerType::Deploy,
)?;
- let deploy_flake = deploy::parse_flake(opts.flake.as_str())?;
+ let deploys = opts.clone().targets.unwrap_or_else(|| {
+ opts.clone()
+ .target
+ .map_or_else(|| vec![".".to_string()], |target| vec![target])
+ });
+
+ let deploy_flakes: Vec<DeployFlake> = deploys
+ .iter()
+ .map(|f| deploy::parse_flake(f.as_str()))
+ .collect::<Result<Vec<DeployFlake>, ParseFlakeError>>()?;
let cmd_overrides = deploy::CmdOverrides {
ssh_user: opts.ssh_user,
@@ -560,26 +644,26 @@ async fn run() -> Result<(), RunError> {
}
if !opts.skip_checks {
- check_deployment(supports_flakes, deploy_flake.repo, &opts.extra_build_args).await?;
+ for deploy_flake in deploy_flakes.iter() {
+ check_deployment(supports_flakes, deploy_flake.repo, &opts.extra_build_args).await?;
+ }
}
-
- let data = get_deployment_data(supports_flakes, &deploy_flake, &opts.extra_build_args).await?;
-
let result_path = opts.result_path.as_deref();
-
+ let data = get_deployment_data(supports_flakes, &deploy_flakes, &opts.extra_build_args).await?;
run_deploy(
- deploy_flake,
+ deploy_flakes,
data,
supports_flakes,
opts.checksigs,
opts.interactive,
- cmd_overrides,
+ &cmd_overrides,
opts.keep_result,
result_path,
&opts.extra_build_args,
opts.debug_logs,
- opts.log_dir,
opts.dry_activate,
+ &opts.log_dir,
+ opts.rollback_succeeded.unwrap_or(true),
)
.await?;
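With these changes, `get_deployment_data` evaluates a whole slice of flakes by wrapping its existing body in a `futures_util` stream and collecting the results. Below is a self-contained sketch of that pattern only; the `eval_flake` stub and the flake names are made up for illustration and stand in for the real `nix eval --json` invocation.

```rust
// Sketch of the stream pattern: evaluate each flake in sequence, fail fast
// on the first error, and collect the successes into a Vec.
use futures_util::stream::{self, StreamExt, TryStreamExt};

// Stand-in for the real Nix evaluation.
async fn eval_flake(flake: &str) -> Result<String, String> {
    Ok(format!("deployment data for {}", flake))
}

#[tokio::main]
async fn main() -> Result<(), String> {
    let flakes = vec![".#node-a", ".#node-b"]; // hypothetical targets
    let data: Vec<String> = stream::iter(flakes)
        .then(|flake| async move { eval_flake(flake).await })
        .try_collect()
        .await?;
    println!("{:?}", data);
    Ok(())
}
```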
diff --git a/src/deploy.rs b/src/deploy.rs
index 285bbbd..60297b5 100644
--- a/src/deploy.rs
+++ b/src/deploy.rs
@@ -1,5 +1,6 @@
// SPDX-FileCopyrightText: 2020 Serokell <https://serokell.io/>
// SPDX-FileCopyrightText: 2020 Andreas Fuchs <asf@boinkor.net>
+// SPDX-FileCopyrightText: 2021 Yannik Sander <contact@ysndr.de>
//
// SPDX-License-Identifier: MPL-2.0
@@ -8,6 +9,8 @@ use std::borrow::Cow;
use thiserror::Error;
use tokio::process::Command;
+use crate::DeployDataDefsError;
+
struct ActivateCommandData<'a> {
sudo: &'a Option<String>,
profile_path: &'a str,
@@ -33,8 +36,8 @@ fn build_activate_command(data: ActivateCommandData) -> String {
}
self_activate_command = format!(
- "{} --temp-path '{}' activate '{}' '{}'",
- self_activate_command, data.temp_path, data.closure, data.profile_path
+ "{} activate '{}' '{}' --temp-path '{}'",
+ self_activate_command, data.closure, data.profile_path, data.temp_path
);
self_activate_command = format!(
@@ -87,7 +90,7 @@ fn test_activation_command_builder() {
log_dir,
dry_activate
}),
- "sudo -u test /nix/store/blah/etc/activate-rs --debug-logs --log-dir /tmp/something.txt --temp-path '/tmp' activate '/nix/store/blah/etc' '/blah/profiles/test' --confirm-timeout 30 --magic-rollback --auto-rollback"
+ "sudo -u test /nix/store/blah/etc/activate-rs --debug-logs --log-dir /tmp/something.txt activate '/nix/store/blah/etc' '/blah/profiles/test' --temp-path '/tmp' --confirm-timeout 30 --magic-rollback --auto-rollback"
.to_string(),
);
}
@@ -112,8 +115,8 @@ fn build_wait_command(data: WaitCommandData) -> String {
}
self_activate_command = format!(
- "{} --temp-path '{}' wait '{}'",
- self_activate_command, data.temp_path, data.closure
+ "{} wait '{}' --temp-path '{}'",
+ self_activate_command, data.closure, data.temp_path,
);
if let Some(sudo_cmd) = &data.sudo {
@@ -139,7 +142,56 @@ fn test_wait_command_builder() {
debug_logs,
log_dir
}),
- "sudo -u test /nix/store/blah/etc/activate-rs --debug-logs --log-dir /tmp/something.txt --temp-path '/tmp' wait '/nix/store/blah/etc'"
+ "sudo -u test /nix/store/blah/etc/activate-rs --debug-logs --log-dir /tmp/something.txt wait '/nix/store/blah/etc' --temp-path '/tmp'"
+ .to_string(),
+ );
+}
+
+struct RevokeCommandData<'a> {
+ sudo: &'a Option<String>,
+ closure: &'a str,
+ profile_path: &'a str,
+ debug_logs: bool,
+ log_dir: Option<&'a str>,
+}
+
+fn build_revoke_command(data: RevokeCommandData) -> String {
+ let mut self_activate_command = format!("{}/activate-rs", data.closure);
+
+ if data.debug_logs {
+ self_activate_command = format!("{} --debug-logs", self_activate_command);
+ }
+
+ if let Some(log_dir) = data.log_dir {
+ self_activate_command = format!("{} --log-dir {}", self_activate_command, log_dir);
+ }
+
+ self_activate_command = format!("{} revoke '{}'", self_activate_command, data.profile_path);
+
+ if let Some(sudo_cmd) = &data.sudo {
+ self_activate_command = format!("{} {}", sudo_cmd, self_activate_command);
+ }
+
+ self_activate_command
+}
+
+#[test]
+fn test_revoke_command_builder() {
+ let sudo = Some("sudo -u test".to_string());
+ let closure = "/nix/store/blah/etc";
+ let profile_path = "/nix/var/nix/per-user/user/profile";
+ let debug_logs = true;
+ let log_dir = Some("/tmp/something.txt");
+
+ assert_eq!(
+ build_revoke_command(RevokeCommandData {
+ sudo: &sudo,
+ closure,
+ profile_path,
+ debug_logs,
+ log_dir
+ }),
+ "sudo -u test /nix/store/blah/etc/activate-rs --debug-logs --log-dir /tmp/something.txt revoke '/nix/var/nix/per-user/user/profile'"
.to_string(),
);
}
@@ -328,7 +380,6 @@ pub async fn deploy_profile(
send_activated.send(()).unwrap();
});
-
tokio::select! {
x = ssh_wait_command.arg(self_wait_command).status() => {
debug!("Wait command ended");
@@ -352,3 +403,60 @@ pub async fn deploy_profile(
Ok(())
}
+
+#[derive(Error, Debug)]
+pub enum RevokeProfileError {
+ #[error("Failed to spawn revocation command over SSH: {0}")]
+ SSHSpawnRevokeError(std::io::Error),
+
+ #[error("Error revoking deployment: {0}")]
+ SSHRevokeError(std::io::Error),
+ #[error("Revoking over SSH resulted in a bad exit code: {0:?}")]
+ SSHRevokeExitError(Option<i32>),
+
+ #[error("Deployment data invalid: {0}")]
+ InvalidDeployDataDefsError(#[from] DeployDataDefsError),
+}
+pub async fn revoke(
+ deploy_data: &crate::DeployData<'_>,
+ deploy_defs: &crate::DeployDefs,
+) -> Result<(), RevokeProfileError> {
+ let self_revoke_command = build_revoke_command(RevokeCommandData {
+ sudo: &deploy_defs.sudo,
+ closure: &deploy_data.profile.profile_settings.path,
+ profile_path: &deploy_data.get_profile_path()?,
+ debug_logs: deploy_data.debug_logs,
+ log_dir: deploy_data.log_dir,
+ });
+
+ debug!("Constructed revoke command: {}", self_revoke_command);
+
+ let hostname = match deploy_data.cmd_overrides.hostname {
+ Some(ref x) => x,
+ None => &deploy_data.node.node_settings.hostname,
+ };
+
+ let ssh_addr = format!("{}@{}", deploy_defs.ssh_user, hostname);
+
+ let mut ssh_activate_command = Command::new("ssh");
+ ssh_activate_command.arg(&ssh_addr);
+
+ for ssh_opt in &deploy_data.merged_settings.ssh_opts {
+ ssh_activate_command.arg(&ssh_opt);
+ }
+
+ let ssh_revoke = ssh_activate_command
+ .arg(self_revoke_command)
+ .spawn()
+ .map_err(RevokeProfileError::SSHSpawnRevokeError)?;
+
+ let result = ssh_revoke.wait_with_output().await;
+
+ match result {
+ Err(x) => Err(RevokeProfileError::SSHRevokeError(x)),
+ Ok(ref x) => match x.status.code() {
+ Some(0) => Ok(()),
+ a => Err(RevokeProfileError::SSHRevokeExitError(a)),
+ },
+ }
+}
diff --git a/src/lib.rs b/src/lib.rs
index a6b57aa..712b8b1 100644
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -1,5 +1,6 @@
// SPDX-FileCopyrightText: 2020 Serokell <https://serokell.io/>
// SPDX-FileCopyrightText: 2020 Andreas Fuchs <asf@boinkor.net>
+// SPDX-FileCopyrightText: 2021 Yannik Sander <contact@ysndr.de>
//
// SPDX-License-Identifier: MPL-2.0
@@ -59,6 +60,22 @@ pub fn logger_formatter_wait(
)
}
+pub fn logger_formatter_revoke(
+ w: &mut dyn std::io::Write,
+ _now: &mut DeferredNow,
+ record: &Record,
+) -> Result<(), std::io::Error> {
+ let level = record.level();
+
+ write!(
+ w,
+ "↩️ {} [revoke] [{}] {}",
+ make_emoji(level),
+ style(level, level.to_string()),
+ record.args()
+ )
+}
+
pub fn logger_formatter_deploy(
w: &mut dyn std::io::Write,
_now: &mut DeferredNow,
@@ -79,6 +96,7 @@ pub enum LoggerType {
Deploy,
Activate,
Wait,
+ Revoke,
}
pub fn init_logger(
@@ -90,6 +108,7 @@ pub fn init_logger(
LoggerType::Deploy => logger_formatter_deploy,
LoggerType::Activate => logger_formatter_activate,
LoggerType::Wait => logger_formatter_wait,
+ LoggerType::Revoke => logger_formatter_revoke,
};
if let Some(log_dir) = log_dir {
@@ -107,6 +126,7 @@ pub fn init_logger(
match logger_type {
LoggerType::Activate => logger = logger.discriminant("activate"),
LoggerType::Wait => logger = logger.discriminant("wait"),
+ LoggerType::Revoke => logger = logger.discriminant("revoke"),
LoggerType::Deploy => (),
}
@@ -324,19 +344,25 @@ impl<'a> DeployData<'a> {
None => whoami::username(),
};
- let profile_user = match self.merged_settings.user {
- Some(ref x) => x.clone(),
- None => match self.merged_settings.ssh_user {
- Some(ref x) => x.clone(),
- None => {
- return Err(DeployDataDefsError::NoProfileUser(
- self.profile_name.to_owned(),
- self.node_name.to_owned(),
- ))
- }
- },
+ let profile_user = self.get_profile_user()?;
+
+ let profile_path = self.get_profile_path()?;
+
+ let sudo: Option<String> = match self.merged_settings.user {
+ Some(ref user) if user != &ssh_user => Some(format!("sudo -u {}", user)),
+ _ => None,
};
+ Ok(DeployDefs {
+ ssh_user,
+ profile_user,
+ profile_path,
+ sudo,
+ })
+ }
+
+ fn get_profile_path(&'a self) -> Result<String, DeployDataDefsError> {
+ let profile_user = self.get_profile_user()?;
let profile_path = match self.profile.profile_settings.profile_path {
None => match &profile_user[..] {
"root" => format!("/nix/var/nix/profiles/{}", self.profile_name),
@@ -347,18 +373,23 @@ impl<'a> DeployData<'a> {
},
Some(ref x) => x.clone(),
};
+ Ok(profile_path)
+ }
- let sudo: Option<String> = match self.merged_settings.user {
- Some(ref user) if user != &ssh_user => Some(format!("sudo -u {}", user)),
- _ => None,
+ fn get_profile_user(&'a self) -> Result<String, DeployDataDefsError> {
+ let profile_user = match self.merged_settings.user {
+ Some(ref x) => x.clone(),
+ None => match self.merged_settings.ssh_user {
+ Some(ref x) => x.clone(),
+ None => {
+ return Err(DeployDataDefsError::NoProfileUser(
+ self.profile_name.to_owned(),
+ self.node_name.to_owned(),
+ ))
+ }
+ },
};
-
- Ok(DeployDefs {
- ssh_user,
- profile_user,
- profile_path,
- sudo,
- })
+ Ok(profile_user)
}
}
@@ -396,15 +427,12 @@ pub fn make_deploy_data<'a, 's>(
}
DeployData {
- profile,
- profile_name,
- node,
node_name,
-
+ node,
+ profile_name,
+ profile,
cmd_overrides,
-
merged_settings,
-
debug_logs,
log_dir,
}
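One detail of the lib.rs refactor worth spelling out: the `sudo` prefix derivation moves but its behaviour is unchanged, and it is what lets the new revoke command run as the right user (see `build_revoke_command` above). A tiny self-contained sketch of that rule follows; the function and its arguments are illustrative, not the crate's API.

```rust
// Sketch of the rule visible in the hunk above: a `sudo -u <user>` prefix is
// only added when the configured profile user differs from the SSH user.
fn sudo_prefix(ssh_user: &str, profile_user: Option<&str>) -> Option<String> {
    match profile_user {
        Some(user) if user != ssh_user => Some(format!("sudo -u {}", user)),
        _ => None,
    }
}

fn main() {
    assert_eq!(
        sudo_prefix("admin", Some("root")),
        Some("sudo -u root".to_string())
    );
    assert_eq!(sudo_prefix("root", Some("root")), None);
    assert_eq!(sudo_prefix("admin", None), None);
}
```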