Scott E. Graves f43c41f88a
2.0.0-rc (#9)
### Issues

* #1 [bug] Unable to mount S3 due to 'item_not_found' exception
* #2 Require bucket name for S3 mounts
* #3 [bug] File size is not being updated in S3 mount
* #4 Upgrade to libfuse-3.x.x
* #5 Switch to renterd for Sia support
* #6 Switch to cpp-httplib to further reduce dependencies
* #7 Remove global_data and calculate used disk space per provider
* #8 Switch to libcurl for S3 mount support

### Changes from v1.x.x

* Added read-only encrypt provider
  * Pass-through mount point that transparently encrypts source data using `XChaCha20-Poly1305`
* Added S3 encryption support via `XChaCha20-Poly1305`
* Added replay protection to remote mounts
* Added support for base64 writes in remote FUSE
* Created statically linked Linux binaries for `amd64` and `aarch64` using `musl-libc`
* Removed legacy Sia renter support
* Removed Skynet support
* Fixed multiple remote mount WinFSP API issues on *NIX servers
* Implemented chunked read and write
  * Writes for non-cached files are performed in chunks of 8 MiB
* Removed `repertory-ui` support
* Removed `FreeBSD` support
* Switched to `libsodium` over `CryptoPP` (see the sketch after this list)
* Switched to `XChaCha20-Poly1305` for remote mounts
* Updated `GoogleTest` to v1.14.0
* Updated `JSON for Modern C++` to v3.11.2
* Updated `OpenSSL` to v1.1.1w
* Updated `RocksDB` to v8.5.3
* Updated `WinFSP` to 2023
* Updated `boost` to v1.78.0
* Updated `cURL` to v8.3.0
* Updated `zlib` to v1.3
* Use `upload_manager` for all providers
  * Adds a delay to uploads to prevent excessive API calls
  * Supports re-upload after mount restart for incomplete uploads
  * NOTE: Uploads for all providers are full file (no resume support)
    * Multipart upload support is planned for S3
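
The encryption-related items above all use `XChaCha20-Poly1305` through `libsodium`'s AEAD API. As a minimal sketch only (this is not repertory's actual code; the `encrypt_chunk` name and the nonce-prefixed output layout are assumptions for illustration), encrypting a single buffer looks roughly like this:

```cpp
#include <sodium.h>

#include <algorithm>
#include <stdexcept>
#include <vector>

// Encrypt one buffer with XChaCha20-Poly1305 and prepend the random 24-byte
// nonce so it can be recovered at decryption time.
std::vector<unsigned char>
encrypt_chunk(const std::vector<unsigned char> &plain,
              const unsigned char key[crypto_aead_xchacha20poly1305_ietf_KEYBYTES]) {
    if (sodium_init() < 0) {
        throw std::runtime_error("libsodium failed to initialize");
    }

    unsigned char nonce[crypto_aead_xchacha20poly1305_ietf_NPUBBYTES];
    randombytes_buf(nonce, sizeof nonce);

    // output layout (an assumption for this sketch): nonce || ciphertext+tag
    std::vector<unsigned char> out(sizeof nonce + plain.size() +
                                   crypto_aead_xchacha20poly1305_ietf_ABYTES);
    std::copy(nonce, nonce + sizeof nonce, out.begin());

    unsigned long long cipher_len{};
    if (crypto_aead_xchacha20poly1305_ietf_encrypt(
            out.data() + sizeof nonce, &cipher_len, plain.data(), plain.size(),
            nullptr, 0, // no additional authenticated data
            nullptr, nonce, key) != 0) {
        throw std::runtime_error("encryption failed");
    }

    out.resize(sizeof nonce + cipher_len);
    return out;
}
```

Decryption follows the same pattern with `crypto_aead_xchacha20poly1305_ietf_decrypt`, which also verifies the Poly1305 authentication tag before returning plaintext.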

Reviewed-on: #9
2023-10-29 06:55:59 +00:00

#include "pugixml.hpp"
#include <string.h>
#include <iostream>
// tag::code[]
bool load_preprocess(pugi::xml_document& doc, const char* path);
bool preprocess(pugi::xml_node node)
{
for (pugi::xml_node child = node.first_child(); child; )
{
if (child.type() == pugi::node_pi && strcmp(child.name(), "include") == 0)
{
pugi::xml_node include = child;
// load new preprocessed document (note: ideally this should handle relative paths)
const char* path = include.value();
pugi::xml_document doc;
if (!load_preprocess(doc, path)) return false;
// insert the comment marker above include directive
node.insert_child_before(pugi::node_comment, include).set_value(path);
// copy the document above the include directive (this retains the original order!)
for (pugi::xml_node ic = doc.first_child(); ic; ic = ic.next_sibling())
{
node.insert_copy_before(ic, include);
}
// remove the include node and move to the next child
child = child.next_sibling();
node.remove_child(include);
}
else
{
if (!preprocess(child)) return false;
child = child.next_sibling();
}
}
return true;
}
bool load_preprocess(pugi::xml_document& doc, const char* path)
{
pugi::xml_parse_result result = doc.load_file(path, pugi::parse_default | pugi::parse_pi); // for <?include?>
return result ? preprocess(doc) : false;
}
// end::code[]
int main()
{
pugi::xml_document doc;
if (!load_preprocess(doc, "character.xml")) return -1;
doc.print(std::cout);
}
// vim:et