mirror of
https://github.com/zerotier/ZeroTierOne.git
synced 2025-03-12 04:36:29 -07:00
352 lines
13 KiB
C++
/* Binary Large Objects interface.
 *
 * Read or write large objects, stored in their own storage on the server.
 *
 * DO NOT INCLUDE THIS FILE DIRECTLY; include pqxx/largeobject instead.
 *
 * Copyright (c) 2000-2022, Jeroen T. Vermeulen.
 *
 * See COPYING for copyright license. If you did not receive a file called
 * COPYING with this source code, please notify the distributor of this
 * mistake, or contact the author.
 */
#ifndef PQXX_H_BLOB
#define PQXX_H_BLOB

#if !defined(PQXX_HEADER_PRE)
#  error "Include libpqxx headers as <pqxx/header>, not <pqxx/header.hxx>."
#endif

#include <cstdint>

#if defined(PQXX_HAVE_PATH)
#  include <filesystem>
#endif

#if defined(PQXX_HAVE_RANGES) && __has_include(<ranges>)
#  include <ranges>
#endif

#if defined(PQXX_HAVE_SPAN) && __has_include(<span>)
#  include <span>
#endif

#include "pqxx/dbtransaction.hxx"

namespace pqxx
{
/** Binary large object.
 *
 * This is how you store data that may be too large for the `BYTEA` type.
 * Access operations are similar to those for a file: you can read, write,
 * query or set the current reading/writing position, and so on.
 *
 * These large objects live in their own storage on the server, indexed by an
 * integer object identifier ("oid").
 *
 * Two `blob` objects may refer to the same actual large object in the
 * database at the same time. Each will have its own reading/writing position,
 * but writes to the one will of course affect what the other sees.
 */
class PQXX_LIBEXPORT blob
{
public:
  /// Create a new, empty large object.
  /** You may optionally specify an oid for the new blob. If you do, then
   * the new object will have that oid -- or creation will fail if there
   * already is an object with that oid.
   */
  [[nodiscard]] static oid create(dbtransaction &, oid = 0);

  /// Delete a large object, or fail if it does not exist.
  static void remove(dbtransaction &, oid);

  /// Open blob for reading. Any attempt to write to it will fail.
  [[nodiscard]] static blob open_r(dbtransaction &, oid);
  /// Open blob for writing. Any attempt to read from it will fail.
  [[nodiscard]] static blob open_w(dbtransaction &, oid);
  /// Open blob for reading and/or writing.
  [[nodiscard]] static blob open_rw(dbtransaction &, oid);

  /// You can default-construct a blob, but it won't do anything useful.
  /** Most operations on a default-constructed blob will throw @ref
   * usage_error.
   */
  blob() = default;

  /// You can move a blob, but not copy it. The original becomes unusable.
  blob(blob &&);
  /// You can move a blob, but not copy it. The original becomes unusable.
  blob &operator=(blob &&);

  blob(blob const &) = delete;
  blob &operator=(blob const &) = delete;
  ~blob();

  /// Maximum number of bytes that can be read or written at a time.
  /** The underlying protocol only supports reads and writes up to 2 GB
   * exclusive.
   *
   * If you need to read or write more data to or from a binary large object,
   * you'll have to break it up into chunks.
   */
  static constexpr std::size_t chunk_limit = 0x7fffffff;

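  // A minimal chunked-read sketch (illustrative only, not part of the API;
  // assumes an open dbtransaction `tx`, a valid oid `id`, and some consumer
  // of the bytes). `read()` returns zero once the end of the blob is reached,
  // so a loop like this processes the object one bounded chunk at a time:
  //
  //     auto b{pqxx::blob::open_r(tx, id)};
  //     std::basic_string<std::byte> buf;
  //     while (b.read(buf, 1 << 20) != 0)
  //     {
  //       // consume buf here; it holds up to 1 MiB per iteration.
  //     }
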
  /// Read up to `size` bytes of the object into `buf`.
  /** Uses a buffer that you provide, resizing it as needed. If it suits you,
   * this lets you allocate the buffer once and then re-use it multiple times.
   *
   * Resizes `buf` as needed.
   *
   * @warning The underlying protocol only supports reads up to 2 GB at a time.
   * If you need to read more, try making repeated calls to @ref append_to_buf.
   */
  std::size_t read(std::basic_string<std::byte> &buf, std::size_t size);

#if defined(PQXX_HAVE_SPAN)
  /// Read up to `std::size(buf)` bytes from the object.
  /** Retrieves bytes from the blob, at the current position, until `buf` is
   * full or there are no more bytes to read, whichever comes first.
   *
   * Returns the filled portion of `buf`. This may be empty.
   */
  template<std::size_t extent = std::dynamic_extent>
  std::span<std::byte> read(std::span<std::byte, extent> buf)
  {
    return buf.subspan(0, raw_read(std::data(buf), std::size(buf)));
  }
#endif // PQXX_HAVE_SPAN

#if defined(PQXX_HAVE_CONCEPTS) && defined(PQXX_HAVE_SPAN)
  /// Read up to `std::size(buf)` bytes from the object.
  /** Retrieves bytes from the blob, at the current position, until `buf` is
   * full or there are no more bytes to read, whichever comes first.
   *
   * Returns the filled portion of `buf`. This may be empty.
   */
  template<binary DATA> std::span<std::byte> read(DATA &buf)
  {
    return {std::data(buf), raw_read(std::data(buf), std::size(buf))};
  }
#else // PQXX_HAVE_CONCEPTS && PQXX_HAVE_SPAN
  /// Read up to `std::size(buf)` bytes from the object.
  /** @deprecated As libpqxx moves to C++20 as its baseline language version,
   * this will take and return `std::span<std::byte>`.
   *
   * Retrieves bytes from the blob, at the current position, until `buf` is
   * full (i.e. its current size is reached), or there are no more bytes to
   * read, whichever comes first.
   *
   * This function will not change either the size or the capacity of `buf`,
   * only its contents.
   *
   * Returns the filled portion of `buf`. This may be empty.
   */
  template<typename ALLOC>
  std::basic_string_view<std::byte> read(std::vector<std::byte, ALLOC> &buf)
  {
    return {std::data(buf), raw_read(std::data(buf), std::size(buf))};
  }
#endif // PQXX_HAVE_CONCEPTS && PQXX_HAVE_SPAN

#if defined(PQXX_HAVE_CONCEPTS)
  /// Write `data` to large object, at the current position.
  /** If the writing position is at the end of the object, this will append
   * `data` to the object's contents and move the writing position so that
   * it's still at the end.
   *
   * If the writing position was not at the end, writing will overwrite the
   * prior data, but it will not remove data that follows the part where you
   * wrote your new data.
   *
   * @warning This is a big difference from writing to a file. You can
   * overwrite some data in a large object, but this does not truncate the
   * data that was already there. For example, if the object contained binary
   * data "abc", and you write "12" at the starting position, the object will
   * contain "12c".
   *
   * @warning The underlying protocol only supports writes up to 2 GB at a
   * time. If you need to write more, try making repeated calls to
   * @ref append_from_buf.
   */
  template<binary DATA> void write(DATA const &data)
  {
    raw_write(std::data(data), std::size(data));
  }
#else
  /// Write `data` to large object, at the current position.
  /** If the writing position is at the end of the object, this will append
   * `data` to the object's contents and move the writing position so that
   * it's still at the end.
   *
   * If the writing position was not at the end, writing will overwrite the
   * prior data, but it will not remove data that follows the part where you
   * wrote your new data.
   *
   * @warning This is a big difference from writing to a file. You can
   * overwrite some data in a large object, but this does not truncate the
   * data that was already there. For example, if the object contained binary
   * data "abc", and you write "12" at the starting position, the object will
   * contain "12c".
   *
   * @warning The underlying protocol only supports writes up to 2 GB at a
   * time. If you need to write more, try making repeated calls to
   * @ref append_from_buf.
   */
  template<typename DATA> void write(DATA const &data)
  {
    raw_write(std::data(data), std::size(data));
  }
#endif

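  // Sketch of the overwrite semantics described above (illustrative only;
  // assumes an open dbtransaction `tx` and an oid `id` whose blob currently
  // holds the three bytes "abc"). Writing two bytes at the start overwrites
  // in place without truncating what follows:
  //
  //     auto b{pqxx::blob::open_w(tx, id)};
  //     std::byte const patch[]{std::byte{'1'}, std::byte{'2'}};
  //     b.write(patch);  // Blob now holds "12c" -- not "12".
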
  /// Resize large object to `size` bytes.
  /** If the blob is more than `size` bytes long, this removes the end so as
   * to make the blob the desired length.
   *
   * If the blob is less than `size` bytes long, it adds enough zero bytes to
   * make it the desired length.
   */
  void resize(std::int64_t size);

  /// Return the current reading/writing position in the large object.
  [[nodiscard]] std::int64_t tell() const;

  /// Set the current reading/writing position to an absolute offset.
  /** Returns the new file offset. */
  std::int64_t seek_abs(std::int64_t offset = 0);
  /// Move the current reading/writing position forwards by an offset.
  /** To move backwards, pass a negative offset.
   *
   * Returns the new file offset.
   */
  std::int64_t seek_rel(std::int64_t offset = 0);
  /// Set the current position to an offset relative to the end of the blob.
  /** You'll probably want an offset of zero or less.
   *
   * Returns the new file offset.
   */
  std::int64_t seek_end(std::int64_t offset = 0);

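  // Since each seek function returns the new offset, seeking relative to the
  // end is a convenient way to learn a blob's total size. An illustrative
  // sketch (assumes an open blob `b`):
  //
  //     std::int64_t size{b.seek_end(0)};  // Offset of the end == size.
  //     b.seek_abs(0);                     // Rewind to the start.
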
  /// Create a binary large object containing given `data`.
  /** You may optionally specify an oid for the new object. If you do, and an
   * object with that oid already exists, creation will fail.
   */
  static oid from_buf(
    dbtransaction &tx, std::basic_string_view<std::byte> data, oid id = 0);

  /// Append `data` to binary large object.
  /** The underlying protocol only supports appending blocks up to 2 GB.
   */
  static void append_from_buf(
    dbtransaction &tx, std::basic_string_view<std::byte> data, oid id);

  /// Read client-side file and store it server-side as a binary large object.
  [[nodiscard]] static oid from_file(dbtransaction &, char const path[]);

#if defined(PQXX_HAVE_PATH) && !defined(_WIN32)
|
|
/// Read client-side file and store it server-side as a binary large object.
|
|
/** This overload is not available on Windows, where `std::filesystem::path`
|
|
* converts to a `wchar_t` string rather than a `char` string.
|
|
*/
|
|
[[nodiscard]] static oid
|
|
from_file(dbtransaction &tx, std::filesystem::path const &path)
|
|
{
|
|
return from_file(tx, path.c_str());
|
|
}
|
|
#endif
|
|
|
|
/// Read client-side file and store it server-side as a binary large object.
|
|
/** In this version, you specify the binary large object's oid. If that oid
|
|
* is already in use, the operation will fail.
|
|
*/
|
|
static oid from_file(dbtransaction &, char const path[], oid);
|
|
|
|
#if defined(PQXX_HAVE_PATH) && !defined(_WIN32)
|
|
/// Read client-side file and store it server-side as a binary large object.
|
|
/** In this version, you specify the binary large object's oid. If that oid
|
|
* is already in use, the operation will fail.
|
|
*
|
|
* This overload is not available on Windows, where `std::filesystem::path`
|
|
* converts to a `wchar_t` string rather than a `char` string.
|
|
*/
|
|
static oid
|
|
from_file(dbtransaction &tx, std::filesystem::path const &path, oid id)
|
|
{
|
|
return from_file(tx, path.c_str(), id);
|
|
}
|
|
#endif
|
|
|
|
/// Convenience function: Read up to `max_size` bytes from blob with `id`.
|
|
/** You could easily do this yourself using the @ref open_r and @ref read
|
|
* functions, but it can save you a bit of code to do it this way.
|
|
*/
|
|
static void to_buf(
|
|
dbtransaction &, oid, std::basic_string<std::byte> &,
|
|
std::size_t max_size);
|
|
|
|
/// Read part of the binary large object with `id`, and append it to `buf`.
|
|
/** Use this to break up a large read from one binary large object into one
|
|
* massive buffer. Just keep calling this function until it returns zero.
|
|
*
|
|
* The `offset` is how far into the large object your desired chunk is, and
|
|
* `append_max` says how much to try and read in one go.
|
|
*/
|
|
static std::size_t append_to_buf(
|
|
dbtransaction &tx, oid id, std::int64_t offset,
|
|
std::basic_string<std::byte> &buf, std::size_t append_max);
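  // A minimal sketch of the chunked-read loop described above.  It assumes
  // `tx` is an open pqxx::dbtransaction and `id` is the oid of an existing
  // blob; neither is defined here.
  //
  //   std::basic_string<std::byte> buf;
  //   std::int64_t offset = 0;
  //   while (std::size_t got =
  //            pqxx::blob::append_to_buf(tx, id, offset, buf, 4096))
  //     offset += std::int64_t(got);
  //
  // Each call appends up to 4096 bytes to `buf`; the loop ends when a call
  // returns zero, i.e. the end of the blob has been reached.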

  /// Write a binary large object's contents to a client-side file.
  static void to_file(dbtransaction &, oid, char const path[]);

#if defined(PQXX_HAVE_PATH) && !defined(_WIN32)
  /// Write a binary large object's contents to a client-side file.
  /** This overload is not available on Windows, where `std::filesystem::path`
   * converts to a `wchar_t` string rather than a `char` string.
   */
  static void
  to_file(dbtransaction &tx, oid id, std::filesystem::path const &path)
  {
    to_file(tx, id, path.c_str());
  }
#endif

  /// Close this blob.
  /** This does not delete the blob from the database; it only terminates your
   * local object for accessing the blob.
   *
   * Resets the blob to a useless state similar to one that was
   * default-constructed.
   *
   * The destructor will do this for you automatically.  Still, there is a
   * reason to `close()` objects explicitly where possible: if an error should
   * occur while closing, `close()` can throw an exception.  A destructor
   * cannot.
   */
  void close();

private:
  PQXX_PRIVATE blob(connection &conn, int fd) noexcept :
          m_conn{&conn}, m_fd{fd}
  {}

  static PQXX_PRIVATE blob open_internal(dbtransaction &, oid, int);
  static PQXX_PRIVATE pqxx::internal::pq::PGconn *
  raw_conn(pqxx::connection *) noexcept;
  static PQXX_PRIVATE pqxx::internal::pq::PGconn *
  raw_conn(pqxx::dbtransaction const &) noexcept;
  static PQXX_PRIVATE std::string errmsg(connection const *);
  static PQXX_PRIVATE std::string errmsg(dbtransaction const &tx)
  {
    return errmsg(&tx.conn());
  }
  PQXX_PRIVATE std::string errmsg() const { return errmsg(m_conn); }
  PQXX_PRIVATE std::int64_t seek(std::int64_t offset, int whence);
  std::size_t raw_read(std::byte buf[], std::size_t size);
  void raw_write(std::byte const buf[], std::size_t size);

  connection *m_conn = nullptr;
  int m_fd = -1;
};
} // namespace pqxx
#endif