### Issues

* #1 [bug] Unable to mount S3 due to 'item_not_found' exception
* #2 Require bucket name for S3 mounts
* #3 [bug] File size is not being updated in S3 mount
* #4 Upgrade to libfuse-3.x.x
* #5 Switch to renterd for Sia support
* #6 Switch to cpp-httplib to further reduce dependencies
* #7 Remove global_data and calculate used disk space per provider
* #8 Switch to libcurl for S3 mount support

### Changes from v1.x.x

* Added read-only encrypt provider
  * Pass-through mount point that transparently encrypts source data using `XChaCha20-Poly1305`
* Added S3 encryption support via `XChaCha20-Poly1305`
* Added replay protection to remote mounts
* Added support for base64 writes in remote FUSE
* Created statically linked Linux binaries for `amd64` and `aarch64` using `musl-libc`
* Removed legacy Sia renter support
* Removed Skynet support
* Fixed multiple remote mount WinFSP API issues on *NIX servers
* Implemented chunked read and write
  * Writes for non-cached files are performed in chunks of 8 MiB
* Removed `repertory-ui` support
* Removed `FreeBSD` support
* Switched to `libsodium` over `CryptoPP` (a usage sketch follows these notes)
* Switched to `XChaCha20-Poly1305` for remote mounts
* Updated `GoogleTest` to v1.14.0
* Updated `JSON for Modern C++` to v3.11.2
* Updated `OpenSSL` to v1.1.1w
* Updated `RocksDB` to v8.5.3
* Updated `WinFSP` to 2023
* Updated `boost` to v1.78.0
* Updated `cURL` to v8.3.0
* Updated `zlib` to v1.3
* Use `upload_manager` for all providers
  * Adds a delay to uploads to prevent excessive API calls
  * Supports re-upload after mount restart for incomplete uploads
  * NOTE: Uploads for all providers are full-file (no resume support)
  * Multipart upload support is planned for S3

Reviewed-on: #9
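Both the encrypt provider and the S3 encryption layer use `XChaCha20-Poly1305` through `libsodium`. The sketch below is an illustration of that primitive only, not repertory's actual code; key derivation, nonce storage, and the on-disk chunk format are assumptions left out of scope. It shows the libsodium AEAD calls involved in protecting a single chunk of file data (build with `-lsodium`):

```cpp
// Minimal XChaCha20-Poly1305 sketch using libsodium's AEAD API.
// Illustration only; repertory's real key management and chunk layout are not shown.
#include <sodium.h>

#include <cstdio>
#include <string>
#include <vector>

int main()
{
    if (sodium_init() < 0)
        return 1; // libsodium must be initialized before any other call

    // random key and per-chunk nonce (24 bytes for XChaCha20)
    unsigned char key[crypto_aead_xchacha20poly1305_ietf_KEYBYTES];
    crypto_aead_xchacha20poly1305_ietf_keygen(key);

    unsigned char nonce[crypto_aead_xchacha20poly1305_ietf_NPUBBYTES];
    randombytes_buf(nonce, sizeof nonce);

    const std::string chunk = "contents of one file chunk";

    // ciphertext is plaintext size plus the 16-byte Poly1305 tag
    std::vector<unsigned char> cipher(chunk.size() + crypto_aead_xchacha20poly1305_ietf_ABYTES);
    unsigned long long cipher_len = 0;

    crypto_aead_xchacha20poly1305_ietf_encrypt(
        cipher.data(), &cipher_len,
        reinterpret_cast<const unsigned char *>(chunk.data()), chunk.size(),
        nullptr, 0,   // no additional authenticated data
        nullptr, nonce, key);

    std::printf("encrypted %llu bytes\n", cipher_len);

    // decryption returns non-zero if the data or tag was tampered with
    std::vector<unsigned char> plain(chunk.size());
    unsigned long long plain_len = 0;
    if (crypto_aead_xchacha20poly1305_ietf_decrypt(
            plain.data(), &plain_len, nullptr,
            cipher.data(), cipher_len,
            nullptr, 0, nonce, key) != 0)
        return 1;

    std::printf("decrypted %llu bytes\n", plain_len);
    return 0;
}
```

Each encrypted chunk carries its own nonce and authentication tag; the nonce must be stored alongside the ciphertext so the chunk can later be decrypted and verified.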
C++ · 44 lines · 1.3 KiB
#include "pugixml.hpp"
|
|
|
|
#include <string.h>
|
|
#include <iostream>
|
|
|
|
int main()
|
|
{
|
|
pugi::xml_document doc;
|
|
if (!doc.load_string("<node id='123'>text</node><!-- comment -->", pugi::parse_default | pugi::parse_comments)) return -1;
|
|
|
|
// tag::node[]
|
|
pugi::xml_node node = doc.child("node");
|
|
|
|
// change node name
|
|
std::cout << node.set_name("notnode");
|
|
std::cout << ", new node name: " << node.name() << std::endl;
|
|
|
|
// change comment text
|
|
std::cout << doc.last_child().set_value("useless comment");
|
|
std::cout << ", new comment text: " << doc.last_child().value() << std::endl;
|
|
|
|
// we can't change value of the element or name of the comment
|
|
std::cout << node.set_value("1") << ", " << doc.last_child().set_name("2") << std::endl;
|
|
// end::node[]
|
|
|
|
// tag::attr[]
|
|
pugi::xml_attribute attr = node.attribute("id");
|
|
|
|
// change attribute name/value
|
|
std::cout << attr.set_name("key") << ", " << attr.set_value("345");
|
|
std::cout << ", new attribute: " << attr.name() << "=" << attr.value() << std::endl;
|
|
|
|
// we can use numbers or booleans
|
|
attr.set_value(1.234);
|
|
std::cout << "new attribute value: " << attr.value() << std::endl;
|
|
|
|
// we can also use assignment operators for more concise code
|
|
attr = true;
|
|
std::cout << "final attribute value: " << attr.value() << std::endl;
|
|
// end::attr[]
|
|
}
|
|
|
|
// vim:et
|
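In this sample, `set_name` and `set_value` return `bool`, so the `1`/`0` values interleaved in the output show which modifications succeeded; renaming a comment or setting a value directly on an element node fails. To try it, compile the file together with the pugixml sources, e.g. `g++ sample.cpp pugixml.cpp -o sample` (file names assumed).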