### Issues

* #1 [bug] Unable to mount S3 due to 'item_not_found' exception
* #2 Require bucket name for S3 mounts
* #3 [bug] File size is not being updated in S3 mount
* #4 Upgrade to libfuse-3.x.x
* #5 Switch to renterd for Sia support
* #6 Switch to cpp-httplib to further reduce dependencies
* #7 Remove global_data and calculate used disk space per provider
* #8 Switch to libcurl for S3 mount support

### Changes from v1.x.x

* Added read-only encrypt provider
  * Pass-through mount point that transparently encrypts source data using `XChaCha20-Poly1305`
* Added S3 encryption support via `XChaCha20-Poly1305` (a sketch of the primitive follows these notes)
* Added replay protection to remote mounts
* Added support for base64 writes in remote FUSE
* Created statically linked Linux binaries for `amd64` and `aarch64` using `musl-libc`
* Removed legacy Sia renter support
* Removed Skynet support
* Fixed multiple remote mount WinFSP API issues on *NIX servers
* Implemented chunked read and write
  * Writes for non-cached files are performed in chunks of 8 MiB
* Removed `repertory-ui` support
* Removed `FreeBSD` support
* Switched to `libsodium` over `CryptoPP`
* Switched to `XChaCha20-Poly1305` for remote mounts
* Updated `GoogleTest` to v1.14.0
* Updated `JSON for Modern C++` to v3.11.2
* Updated `OpenSSL` to v1.1.1w
* Updated `RocksDB` to v8.5.3
* Updated `WinFSP` to 2023
* Updated `boost` to v1.78.0
* Updated `cURL` to v8.3.0
* Updated `zlib` to v1.3
* Use `upload_manager` for all providers
  * Adds a delay to uploads to prevent excessive API calls
  * Supports re-upload after mount restart for incomplete uploads
  * NOTE: Uploads for all providers are full file (no resume support)
  * Multipart upload support is planned for S3

Reviewed-on: #9
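The encryption-related items above rely on `libsodium`'s XChaCha20-Poly1305 AEAD. As a rough illustration of that primitive only, a sealed buffer might be produced as below; the helper name `encrypt_chunk`, the `byte_buffer` alias, and the nonce-prefixed output layout are assumptions for the sketch, not repertory's actual on-disk or wire format.

```cpp
// XChaCha20-Poly1305 sealing sketch using libsodium. The nonce-prefixed
// output layout and the helper name are assumptions, not repertory's format.
#include <sodium.h>

#include <stdexcept>
#include <vector>

using byte_buffer = std::vector<unsigned char>;

auto encrypt_chunk(
    const byte_buffer &plain,
    const unsigned char (&key)[crypto_aead_xchacha20poly1305_ietf_KEYBYTES])
    -> byte_buffer {
  if (sodium_init() < 0) { // safe to call more than once
    throw std::runtime_error("libsodium initialization failed");
  }

  byte_buffer out(crypto_aead_xchacha20poly1305_ietf_NPUBBYTES + plain.size() +
                  crypto_aead_xchacha20poly1305_ietf_ABYTES);

  // Random 192-bit nonce stored in front of the ciphertext and tag.
  randombytes_buf(out.data(), crypto_aead_xchacha20poly1305_ietf_NPUBBYTES);

  unsigned long long cipher_len{};
  crypto_aead_xchacha20poly1305_ietf_encrypt(
      out.data() + crypto_aead_xchacha20poly1305_ietf_NPUBBYTES, &cipher_len,
      plain.data(), plain.size(), nullptr, 0U, nullptr, out.data(), key);

  out.resize(crypto_aead_xchacha20poly1305_ietf_NPUBBYTES + cipher_len);
  return out;
}
```

Decryption reverses the layout with `crypto_aead_xchacha20poly1305_ietf_decrypt`, which verifies the Poly1305 tag and rejects tampered data.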
75 lines · 2.8 KiB · C++
/*
  Copyright <2018-2023> <scott.e.graves@protonmail.com>

  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
  in the Software without restriction, including without limitation the rights
  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
  copies of the Software, and to permit persons to whom the Software is
  furnished to do so, subject to the following conditions:

  The above copyright notice and this permission notice shall be included in all
  copies or substantial portions of the Software.

  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
  SOFTWARE.
*/
#ifndef INCLUDE_COMM_CURL_CURL_REQUESTS_HTTP_REQUEST_BASE_HPP_
#define INCLUDE_COMM_CURL_CURL_REQUESTS_HTTP_REQUEST_BASE_HPP_

#include "types/repertory.hpp"
#include "utils/native_file.hpp"

namespace repertory::curl::requests {
// Signature of a libcurl read callback (CURLOPT_READFUNCTION).
using read_callback = size_t (*)(char *, size_t, size_t, void *);

// Receives the response body and HTTP status code for a completed request.
using response_callback =
    std::function<void(const data_buffer &data, long response_code)>;

// State handed to `read_file_data` while streaming a file to libcurl.
struct read_file_info final {
  stop_type &stop_requested;
  native_file::native_file_ptr nf{};
  std::uint64_t offset{};
};

// Read callback that feeds file contents to libcurl chunk by chunk,
// advancing `offset` and aborting the transfer on error or stop request.
inline const auto read_file_data = static_cast<read_callback>(
    [](char *buffer, size_t size, size_t nitems, void *instream) -> size_t {
      auto *rd = reinterpret_cast<read_file_info *>(instream);
      std::size_t bytes_read{};
      auto ret =
          rd->nf->read_bytes(buffer, size * nitems, rd->offset, bytes_read);
      if (ret) {
        rd->offset += bytes_read;
      }
      return ret && not rd->stop_requested ? bytes_read : CURL_READFUNC_ABORT;
    });

// Common state shared by all curl-based HTTP requests; derived request types
// implement `set_method()` to configure the curl handle for their verb.
struct http_request_base {
  virtual ~http_request_base() = default;

  bool allow_timeout{};
  std::optional<std::string> aws_service;
  std::optional<std::string> decryption_token{};
  http_headers headers{};
  std::string path{};
  query_parameters query{};
  std::optional<http_range> range{};
  std::optional<response_callback> response_handler;
  std::optional<http_headers> response_headers;
  std::optional<std::uint64_t> total_size{};

  [[nodiscard]] virtual auto get_path() const -> std::string { return path; }

  [[nodiscard]] virtual auto set_method(CURL *curl,
                                        stop_type &stop_requested) const
      -> bool = 0;
};
} // namespace repertory::curl::requests

#endif // INCLUDE_COMM_CURL_CURL_REQUESTS_HTTP_REQUEST_BASE_HPP_
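To show how the pieces above are meant to fit together, here is a hypothetical derived request; the type name `http_put_file_sketch`, the include path, and the exact set of curl options are assumptions for illustration, not the library's actual upload implementation.

```cpp
// Hypothetical PUT-style request (illustration only). It wires read_file_data
// into libcurl so the request body is streamed from a file in chunks.
#include <curl/curl.h>

#include "comm/curl/curl_requests/http_request_base.hpp" // assumed include path

namespace repertory::curl::requests {
struct http_put_file_sketch final : http_request_base {
  read_file_info *file_info{}; // non-owning; caller keeps it alive for the transfer

  [[nodiscard]] auto set_method(CURL *curl, stop_type & /*stop_requested*/) const
      -> bool override {
    // Stream the body from the file instead of buffering it in memory.
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_file_data);
    curl_easy_setopt(curl, CURLOPT_READDATA, file_info);
    if (total_size.has_value()) {
      curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE,
                       static_cast<curl_off_t>(*total_size));
    }
    return file_info != nullptr;
  }
};
} // namespace repertory::curl::requests
```

A caller would open the source file into a `read_file_info`, set `total_size`, and let whatever session object owns the `CURL` handle invoke `set_method()` before performing the transfer; stop handling flows through the `stop_requested` reference captured in `read_file_info`.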