Compare commits

46 commits

Author SHA1 Message Date
Fedora Release Engineering
cd5135cf3a Rebuilt for https://fedoraproject.org/wiki/Fedora_44_Mass_Rebuild 2026-01-16 02:25:37 +00:00
Viktor Ashirov
26f66fc9f3 Fix broken FreeIPA upgrade and replication issues
- Resolves: rhbz#2424132 Upgrade from freeipa-4.12.5-3 and 390-ds-base-3.1.3-10 to latest rawhide fails
2026-01-12 12:18:28 +01:00
Viktor Ashirov
e3abfab8ae Correct License: string for robdb-libs 2026-01-09 19:43:17 +01:00
Viktor Ashirov
6a9fe72ef8 Fix broken FreeIPA upgrade and replication issues
- Resolves: rhbz#2424132 Upgrade from freeipa-4.12.5-3 and 390-ds-base-3.1.3-10 to latest rawhide fails
- Fix jemalloc compilation issue with GCC 15
- Issue 7096 - During replication online total init the function idl_id_is_in_idlist is not scaling with large database
- Issue 7118 - Revise paged result search locking
- Issue 7108 - Fix shutdown crash in entry cache destruction

Use correct tarball from upstream.
2026-01-09 18:48:11 +01:00
Yaakov Selkowitz
0e19d9ccb9 Restore libdb in sources 2025-12-24 00:59:03 -05:00
Mark Reynolds
c134ea11db Issue 7147 - entrycache_eviction_test is failing (#7148)
Issue 1793 - RFE - Dynamic lists - UI and CLI updates
Issue 7119 - Fix DNA shared config replication test (#7143)
Issue 7081 - Repl Log Analysis - Implement data sampling with performance and timezone fixes (#7086)
Issue 1793 - RFE - Implement dynamic lists
Issue 7112 - dsctrl dblib bdb2mdb core dumps and won't allow conversion (#7144)
Issue 7053 - Remove memberof_del_dn_from_groups from MemberOf plugin (#7064)
Issue 7138 - test_cleanallruv_repl does not restart supplier3 (#7139)
Issue 6753 - Port ticket47921 test to indirect_cos_test using DSLdapObject (#7134)
Issue 7128 - memory corruption in alias entry plugin (#7131)
Issue 7091 - Duplicate local password policy entries listed (#7092)
Issue 7124 - BDB cursor race condition with transaction isolation (#7125)
Issue 6951 - Dynamic Certificate refresh phase 1 - Search support (#7117)
Issue 7132 - Keep alive entry updated too soon after an offline import (#7133)
Issue 7135 - Not enough space for tests on GH runner (#7136)
Issue 7121 - LeakSanitizer: various leaks during replication (#7122)
Issue 7115 - LeakSanitizer: leak in `slapd_bind_local_user()` (#7116)
Issue 7109 - AddressSanitizer: SEGV ldap/servers/slapd/csnset.c:302 in csnset_dup (#7114)
Issue 7119 - Harden DNA plugin locking for shared server list operations (#7120)
Issue 7084 - UI - schema - sorting attributes breaks expanded row
Issue 6753 - Port ticket47910 test to logconv_test using DSLdapObject (#7098)
Issue 6753 - Port ticket47920 test to ldap_controls_test using DSLdapObject (#7103)
Issue 7007 - Improve paged result search locking
Issue 7041 - Add WebUI test for group member management (#7111)
Issue 3555 - UI - Fix audit issue with npm - glob (#7107)
Issue 7089 - Fix dsconf certificate list (#7090)
Issue 7076, 6992, 6784, 6214 - Fix CI test failures (#7077)
Bump js-yaml from 4.1.0 to 4.1.1 in /src/cockpit/389-console (#7097)
Issue 7069 - Fix error reporting in HAProxy trusted IP parsing (#7094)
Issue 7049 - RetroCL plugin generates invalid LDIF
Issue 7055 - Online initialization of consumers fails with error -23 (#7075)
Issue 6753 - Remove ticket 47900 test (#7087)
Issue 6753 - Port ticket 49008 test (#7080)
Issue 7042 - Enable global_backend_lock when memberofallbackend is enabled (#7043)
Issue 7078 - audit json logging does not encode binary values
Issue 7069 - Add Subnet/CIDR Support for HAProxy Trusted IPs (#7070)
Issue 7056 - DSBLE0007 doesn't generate remediation steps for missing indexes
Issue 6660 - CLI, UI - Improve replication log analyzer usability (#7062)
Issue 7065 - A search filter containing a non normalized DN assertion does not return matching entries (#7068)
Issue 7071 - search filter (&(cn:dn:=groups)) no longer returns results
Issue 7073 - Add NDN cache size configuration and enforcement tests (#7074)
Issue 6753 - Removing ticket 47871 test and porting to DSLdapObject (#7045)
Issue 7041 - CLI/UI - memberOf - no way to add/remove specific group filters
Issue 6753 - Port ticket 48228 test (#7067)
Issue 7029 - Add test case to measure ndn cache performance impact (#7030)
Issue 7061 - CLI/UI - Improve error messages for dsconf localpwp list
Issue 7059 - UI - unable to upload pem file
Issue 7032 - The new ipahealthcheck test ipahealthcheck.ds.backends.BackendsCheck raises CRITICAL issue (#7036)
Issue 7047 - MemberOf plugin logs null attribute name on fixup task completion (#7048)
Issue 7044 - RFE - index sudoHost by default (#7046)
Issue 6846 - Attribute uniqueness is not enforced with modrdn (#7026)
Issue 6784 - Support of Entry cache pinned entries (#6785)
Issue 6979 - Improve the way to detect asynchronous operations in the access logs (#6980)
Issue 6753 - Port ticket 47931 test (#7038)
Issue 7035 - RFE - memberOf - adding scoping for specific groups
CLI/UI - Add option to delete all replication conflict entries
Issue 7033 - lib389 -  basic plugin status not in JSON
Issue 7023 - UI - if first instance that is loaded is stopped it breaks parts of the UI
Issue 6753 - Removing ticket 47714 test and porting to DSLdapObject (#6946)
Issue 7027 - 389-ds-base OpenScanHub Leaks Detected (#7028)
Issue 6753 - Removing ticket 47676 test and porting to DSLdapObject (#6938)
Issue 6966 - On large DB, unlimited IDL scan limit reduce the SRCH performance (#6967)
Issue 6660 - UI - Improve replication log analysis charts and usability (#6968)
Issue 6753 - Removing ticket 47653MMR test and porting to DSLdapObject (#6926)
Issue 7021 - Units for changing MDB max size are not consistent across different tools (#7022)
Issue 6753 - Removing ticket 49463 test and porting to DSLdapObject (#6899)
Issue 6954 - do not delete referrals on chain_on_update backend
Issue 6982 - UI - MemberOf shared config does not validate DN properly (#6983)
Issue 6740 - Fix FIPS mode test failures in syncrepl, mapping tree, and resource limits (#6993)
Issue 7018 - BUG - prevent stack depth being hit (#7019)
Issue 7014 - memberOf - ignored deferred updates with LMDB
Issue 7002 - restore is failing. (#7003)
Issue 6758 - Fix WebUI monitoring test failure due to FormSelect component deprecation (#7004)
Issue 6753 - Removing ticket 47869 test and porting to DSLdapObject (#7001)
Issue 6753 - Port ticket 49073 test (#7005)
Issue 7016 - fix NULL deref in send_referrals_from_entry() (#7017)
Issue 6753 - Port ticket 47815 test (#7000)
Issue 7010 - Fix certdir underflow in slapd_nss_init() (#7011)
Issue 7012 - improve dsctl dbverify result when backend does not exist (#7013)
Issue 6753 - Removing ticket 477828 test and porting to DSLdapObject (#6989)
Issue 6753 - Removing ticket 47721 test and porting to DSLdapObject (#6973)
Issue 6992 - Improve handling of mismatched ldif import (#6999)
Issue 6997 - Logic error in get_bdb_impl_status prevents bdb2mdb execution (#6998)
Issue 6810 - Deprecate PAM PTA plugin configuration attributes in base entry - fix memleak (#6988)
Issue 6971 - bundle-rust-npm.py: TypeError: argument of type 'NoneType' is not iterable (#6972)
Fix overflow in certmap filter/DN buffers (#6995)
Issue 6753 - Port ticket 49386 test (#6987)
Issue 6753 - Removing ticket 47787 test and porting to DSLdapObject (#6976)
Issue 6753 - Port ticket 49072 test (#6984)
Issue 6990 - UI - Replace deprecated Select components with new TypeaheadSelect (#6996)
Issue 6990 - UI - Fix typeahead Select fields losing values on Enter keypress (#6991)
Issue 6887 - Enhance logconv.py to add support for JSON access logs (#6889)
Issue 6985 - Some logconv CI tests fail with BDB (#6986)
Issue 6891 - JSON logging - add wrapper function that checks for NULL
Issue 4835 - dsconf display an incomplete help with changelog setting (#6769)
Issue 6753 - Port ticket 47963 & 49184 tests (#6970)
Issue 6753 - Port ticket 47829 & 47833 tests
Issue 6977 - UI - Show error message when trying to use unavailable ports (#6978)
Issue 6956 - More UI fixes
Issue 6626 - Fix version
Issue 6900 - Rename test files for proper pytest discovery (#6909)
Issue 6947 - Revise time skew check in healthcheck tool and add option to exclude checks
Issue 6805 - RFE - Multiple backend entry cache tuning
Issue 6753 - Port and fix ticket 47823 tests
Issue 6843 - Add CI tests for logconv.py (#6856)
Issue 6933 - When deferred memberof update is enabled after the server crashed it should not launch memberof fixup task by default (#6935)
UI - update Radio handlers and LDAP entries last modified time
Issue 6810 - Deprecate PAM PTA plugin configuration attributes in base entry (#6832)
Issue 6660 - UI - Fix minor typo (#6955)
Issue 6753 - Port ticket 47808 test
Issue 6910 - Fix latest coverity issues
Issue 6753 - Removing ticket 50232 test and porting to DSLdapObject (#6861)
Issue 6919 - numSubordinates/tombstoneNumSubordinates are inconsisten… (#6920)
Issue 6430 - Fix build with bundled libdb
Issue 6342 - buffer overflow in the function parseVariant (#6927)
Issue 6940 - dsconf monitor server fails with ldapi:// due to absent server ID (#6941)
Issue 6936 - Make user/subtree policy creation idempotent (#6937)
Migrate from PR_Poll to epoll and timerfd. (#6924)
Issue 6928 - The parentId attribute is indexed with improper matching rule
Issue 6753 - Removing ticket 49540 test and porting to DSLdapObject (#6877)
Issue 6904 - Fix config_test.py::test_lmdb_config
Issue 5120 - Fix compilation error
Issue 6929 - Compilation failure with rust-1.89 on Fedora ELN
Issue 6922 - AddressSanitizer: leaks found by acl test suite
Issue 6519 - Add basic dsidm account tests
Issue 6753 - Port ticket test 47573
Issue 6875 - Fix dsidm tests
Issues 6913, 6886, 6250 - Adjust xfail marks (#6914)
Issue 6768 - ns-slapd crashes when a referral is added (#6780)
Issue 6468 - CLI - Fix default error log level
Issue 6181 - RFE - Allow system to manage uid/gid at startup
Issue 6901 - Update changelog trimming logging - fix tests
Issue 6778 - Memory leak in roles_cache_create_object_from_entry part 2
Issue 6897 - Fix disk monitoring test failures and improve test maintainability (#6898)
Issue 6884 - Mask password hashes in audit logs (#6885)
Issue 6594 - Add test for numSubordinates replication consistency with tombstones (#6862)
Issue 6250 - Add test for entryUSN overflow on failed add operations (#6821)
Issue 6895 - Crash if repl keep alive entry can not be created
Issue 6663 - Fix NULL subsystem crash in JSON error logging (#6883)
Issue 6430 - implement read-only bdb (#6431)
Issue 6901 - Update changelog trimming logging
Issue 6880 - Fix ds_logs test suite failure
Issue 6352 - Fix DeprecationWarning
Issue 6800 - Rerun the check in verbose mode on failure
Issue 6893 - Log user that is updated during password modify extended operation
Issue 6772 - dsconf - Replicas with the "consumer" role allow for viewing and modification of their changelog. (#6773)
Issue 6829 - Update parametrized docstring for tests
Issue 6888 - Missing access JSON logging for TLS/Client auth
Issue 6878 - Prevent repeated disconnect logs during shutdown (#6879)
Issue 6872 - compressed log rotation creates files with world readable permission
Issue 6859 - str2filter is not fully applying matching rules
Issue 5733 - Remove outdated Dockerfiles
Issue 6800 - Check for minimal supported Python version
Issue 6868 - UI - schema attribute table expansion break after moving to a new page
Issue 6865 - AddressSanitizer: leak in agmt_update_init_status
Issue 6848 - AddressSanitizer: leak in do_search
Issue 6850 - AddressSanitizer: memory leak in mdb_init
Issue 6854 - Refactor for improved data management (#6855)
Issue 6756 - CLI, UI - Properly handle disabled NDN cache (#6757)
Issue 6857 - uiduniq: allow specifying match rules in the filter
Issue 6852 - Move ds* CLI tools back to /sbin
Issue 6753 - Port ticket tests 48294 & 48295
Issue 6753 - Add 'add_exclude_subtree' and 'remove_exclude_subtree' methods to Attribute uniqueness plugin
Issue 6841 - Cancel Actions when PR is updated
Issue 6838 - lib389/replica.py is using nonexistent datetime.UTC in Python 3.9
Issue 6822 - Backend creation cleanup and Database UI tab error handling (#6823)
Issue 6782 - Improve paged result locking
Issue 6829 - Update parametrized docstring for tests
2025-12-16 15:24:48 -05:00
Python Maint
0fbd2343c9 Rebuilt for Python 3.14.0rc3 bytecode 2025-09-19 12:06:10 +02:00
Yaakov Selkowitz
8a64e863ed Fix build --with-bundle-libdb, enable for ELN
While the goal is to ship no BDB backend in RHEL 11, this currently cannot
be built without one.  As such, building with a bundled libdb and then
dropping the -bdb subpackage from ELN CRB gets us as close as possible to
that state for now.

https://github.com/389ds/389-ds-base/issues/6944
https://github.com/389ds/389-ds-base/pull/6945
2025-08-20 17:44:50 -04:00
Python Maint
a4238f09da Rebuilt for Python 3.14.0rc2 bytecode 2025-08-15 15:24:07 +02:00
Viktor Ashirov
f4f2109f94 Rebuild for https://fedoraproject.org/wiki/Changes/389_Directory_Server_3.2.0 2025-08-11 13:38:41 +02:00
František Zatloukal
11a2e08521 Rebuilt for icu 77.1 2025-08-06 09:52:20 +02:00
Fedora Release Engineering
ad5313fd73 Rebuilt for https://fedoraproject.org/wiki/Fedora_43_Mass_Rebuild 2025-07-23 15:39:11 +00:00
Yaakov Selkowitz
c01eeaf809 Fix checksum of libdb sources
[skip changelog]
2025-07-09 12:48:07 -04:00
Viktor Ashirov
53a2fcc07b Remove old patches
[skip changelog]
2025-07-08 09:46:38 +02:00
Viktor Ashirov
a800aa34d1 Update sources file 2025-06-30 11:57:30 +02:00
Viktor Ashirov
5ed75f148c Update to 3.1.3 2025-06-30 11:49:06 +02:00
Viktor Ashirov
9c09966b8b Resolves: Issue 6776 - Enabling audit log makes slapd coredump 2025-06-16 14:20:58 +02:00
Python Maint
7ab459a547 Rebuilt for Python 3.14 2025-06-03 13:57:04 +02:00
Miro Hrončok
250f86dedd python3-lib389: Remove manually specified runtime Requires
The requires are automatically generated
and their manual repetition is forbidden by packaging guidelines.

> Automatically determined dependencies MUST NOT be duplicated by manual dependencies.

https://docs.fedoraproject.org/en-US/packaging-guidelines/#_package_dependencies

Even further:

> Packages SHOULD NOT have an explicit runtime dependency on python3.

https://docs.fedoraproject.org/en-US/packaging-guidelines/Python/#_dependencies

I kept the Requires for python3-libselinux as it is not generated.
I have not checked whether or not this dependency is actually required.

---

My main motivation is this: https://github.com/389ds/389-ds-base/pull/6719

Once released upstream,
the runtime dependency on python3-setuptools will be redundant.
By deleting it now,
I reduce the risk of having it in the spec even after the upstream change lands.
2025-04-02 12:18:27 +02:00
Yaakov Selkowitz
07e8a882bc Fix build of bundled libdb with GCC 15
GCC 15 defaults to C23.  libdb is obsolete and the last release was over a
decade ago, and therefore cannot be expected to compile to the latest
standards.
2025-03-05 08:14:45 -05:00
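A common way to keep such legacy code building under GCC 15 is to pin the C dialect in the package's build flags. This is an illustrative sketch only; the exact flag and configure switch used by the real spec are assumptions, not copied from it (the `--with-bundle-libdb` option appears in the commit titles above):

```
# Illustrative spec fragment: build the bundled libdb with a pre-C23 dialect
%build
# C23 rejects K&R function definitions, implicit int, and similar old idioms
export CFLAGS="%{optflags} -std=gnu17"
%configure --with-bundle-libdb
%make_build
```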
Viktor Ashirov
1e3c90db19 Update to 3.1.2-5
- Resolves: Issue 6489 - After log rotation refresh the FD pointer
- Resolves: Issue 6554 - During import of entries without nsUniqueId, a supplier generates duplicate nsUniqueId (LMDB only)
- Resolves: Issue 6555 - Potential crash when deleting a replicated backend
2025-02-14 14:43:03 +01:00
Zbigniew Jędrzejewski-Szmek
6343e0d443 Drop call to %sysusers_create_compat
After https://fedoraproject.org/wiki/Changes/RPMSuportForSystemdSysusers,
rpm will handle this automatically.
2025-02-08 16:34:03 +01:00
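With the macro call dropped, rpm creates the account directly from the package's sysusers.d file at install time. For illustration only, an entry in sysusers.d(5) format roughly like what the package ships (the account name and paths here are assumptions, not copied from the real file):

```
# /usr/lib/sysusers.d/389-ds-base.conf (illustrative sketch)
u dirsrv - "389 Directory Server" /var/lib/dirsrv /usr/sbin/nologin
```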
Björn Esser
78d0143425 Add explicit BR: libxcrypt-devel
Signed-off-by: Björn Esser <besser82@fedoraproject.org>
2025-02-01 19:52:34 +01:00
Viktor Ashirov
ca8f74ff67 Replace python3-magic with python3-file-magic 2025-01-25 17:22:27 +01:00
Viktor Ashirov
c0a85dde2d Update to 3.1.2 2025-01-24 15:00:02 +01:00
Fedora Release Engineering
16dbfc769e Rebuilt for https://fedoraproject.org/wiki/Fedora_42_Mass_Rebuild 2025-01-20 07:16:14 +00:00
Fedora Release Engineering
409e247670 Rebuilt for https://fedoraproject.org/wiki/Fedora_42_Mass_Rebuild 2025-01-16 08:19:55 +00:00
Pete Walter
cc3b929de3 Rebuild for ICU 76 2024-12-08 22:02:34 +00:00
Viktor Ashirov
fc8a2a3a6f Fix source URL 2024-10-30 11:46:51 +01:00
Miroslav Suchý
0082bb49b0 Add exception to GPL license in license tag
See https://gitlab.com/fedora/legal/fedora-license-data/-/issues/517
2024-10-29 22:25:15 +00:00
Viktor Ashirov
6ee4494a7a Resolves: VLV errors in Fedora 40 with RSNv3 and pruning enabled (rhbz#2317851) 2024-10-15 13:28:42 +02:00
Viktor Ashirov
9d411e6f98 Replace lmdb with lmdb-libs in Requires 2024-07-30 13:51:33 +02:00
Viktor Ashirov
315706ba43 Update to 3.1.1
...
- Resolves: CVE-2024-1062 (rhbz#2261884)
- Resolves: CVE-2024-2199 (rhbz#2283632)
- Resolves: CVE-2024-3657 (rhbz#2283631)
- Resolves: CVE-2024-5953 (rhbz#2292109)
2024-07-30 11:58:16 +02:00
Fedora Release Engineering
1cf86bb6f0 Rebuilt for https://fedoraproject.org/wiki/Fedora_41_Mass_Rebuild 2024-07-17 14:39:40 +00:00
Yaakov Selkowitz
0dd17702a8 Use bundled libdb on RHEL and ELN
Based on c10s:
8dfbbc21d0
2024-06-25 15:56:38 -04:00
Yaakov Selkowitz
46083690af Fix bundled libdb build with RPM 4.20
https://github.com/389ds/389-ds-base/pull/6235
2024-06-25 01:05:46 -04:00
Viktor Ashirov
291e311dd3 Drop pytest dependency 2024-06-17 09:58:18 +02:00
Viktor Ashirov
2303613243 Drop obsolete perl MODULE_COMPAT requirement 2024-06-13 16:51:14 +02:00
Miro Hrončok
60a22a6d73 Stop Providing libjemalloc.so.2()(64bit)
This was removed in eeb80bdccc
and it made 389-ds-base-libs and 389-ds-base Provides libjemalloc.so.2()(64bit).

Package Requiring it would install 389-ds-base-libs instead of jemalloc,
failing to execute with:

    $ nsupdate
    nsupdate: error while loading shared libraries: libjemalloc.so.2:
    cannot open shared object file: No such file or directory
2024-06-10 07:42:51 +02:00
Python Maint
27db1de21e Rebuilt for Python 3.13 2024-06-07 22:44:48 +02:00
Viktor Ashirov
ebbd5286c6 Exclude i686 architecture
[skip changelog]
2024-06-01 19:14:48 +02:00
Viktor Ashirov
eeb80bdccc Sync spec file with the upstream spec file 2024-05-31 22:05:00 +02:00
Viktor Ashirov
b308bcac8d Convert to %autorelease and %autochangelog
[skip changelog]
2024-05-31 21:47:30 +02:00
James Chapman
e6aa402601 Bump version to 3.1.0
Issue 6142 - Fix CI tests (#6161)
Issue 6157 - Cockpit crashes when getting replication status if topology contains an old 389ds version (#6158)
Issue 5105 - lmdb - Cannot create entries with long rdn - fix covscan (#6131)
Issue 6086 - Ambiguous warning about SELinux in dscreate for non-root user
Issue 6094 - Add coverity scan workflow
Issue 5962 - Rearrange includes for 32-bit support logic
Issue 6046 - Make dscreate to work during kickstart installations
Issue 6073 - Improve error log when running out of memory (#6084)
Issue 6071 - Instance creation/removal is slow
Issue 6010 - 389 ds ignores nsslapd-maxdescriptors (#6027)
Issue 6075 - Ignore build artifacts (#6076)
Issue 6068 - Add dscontainer stop function
2024-05-15 12:25:05 +01:00
Viktor Ashirov
51c0d48236 Drop unused patch 2024-04-24 14:25:10 +02:00
Viktor Ashirov
d4e20edc57 Enable CI gating 2024-04-24 14:23:54 +02:00
18 changed files with 1800 additions and 1770 deletions

@@ -1,53 +0,0 @@
From fc7f5aa01e245c7c2e35b01d171dbd5a6dc75db4 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Sat, 25 Jan 2025 13:54:33 +0100
Subject: [PATCH] Issue 6544 - logconv.py: python3-magic conflicts with
python3-file-magic
Bug Description:
python3-magic and python3-file-magic can't be installed simultaneously,
python3-magic is not packaged for EL10.
Fix Description:
Use python3-file-magic instead.
Issue identified and fix suggested by Adam Williamson.
Fixes: https://github.com/389ds/389-ds-base/issues/6544
Reviewed by: @mreynolds389 (Thanks!)
---
ldap/admin/src/logconv.py | 3 +--
rpm/389-ds-base.spec.in | 2 +-
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/ldap/admin/src/logconv.py b/ldap/admin/src/logconv.py
index 566f9af38..2fb5bb8c1 100755
--- a/ldap/admin/src/logconv.py
+++ b/ldap/admin/src/logconv.py
@@ -1798,8 +1798,7 @@ class logAnalyser:
return None
try:
- mime = magic.Magic(mime=True)
- filetype = mime.from_file(filepath)
+ filetype = magic.detect_from_filename(filepath).mime_type
# List of supported compression types
compressed_mime_types = [
diff --git a/rpm/389-ds-base.spec.in b/rpm/389-ds-base.spec.in
index 3146b9186..3c6e95938 100644
--- a/rpm/389-ds-base.spec.in
+++ b/rpm/389-ds-base.spec.in
@@ -298,7 +298,7 @@ Requires: json-c
# Log compression
Requires: zlib-devel
# logconv.py, MIME type
-Requires: python-magic
+Requires: python3-file-magic
# Picks up our systemd deps.
%{?systemd_requires}
--
2.48.0
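Once `detect_from_filename(...).mime_type` returns a MIME string, logconv.py checks it against a list of compressed types to pick how to open the log. A minimal Python model of that dispatch step — the `OPENERS` mapping and `opener_for` name are mine, not from logconv.py, and the actual list of supported types lives in `ldap/admin/src/logconv.py`:

```python
import bz2
import gzip
import lzma

# Hypothetical mapping from detected MIME type to an opener callable,
# modeled on the compressed_mime_types idea visible in the patch above.
OPENERS = {
    "application/gzip": gzip.open,
    "application/x-bzip2": bz2.open,
    "application/x-xz": lzma.open,
}


def opener_for(mime_type):
    """Return a file-opening callable for a detected MIME type.

    Unknown types fall back to the plain built-in open().
    """
    return OPENERS.get(mime_type, open)
```

The point of the upstream fix is only the detection call; this dispatch sketch shows why a correct MIME string matters downstream.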

@@ -0,0 +1,318 @@
From 1c9c535888b9a850095794787d67900b04924a76 Mon Sep 17 00:00:00 2001
From: tbordaz <tbordaz@redhat.com>
Date: Wed, 7 Jan 2026 11:21:12 +0100
Subject: [PATCH] Issue 7096 - During replication online total init the
function idl_id_is_in_idlist is not scaling with large database (#7145)
Bug description:
During a online total initialization, the supplier sorts
the candidate list of entries so that the parents are sent before
children entries.
With large DB the ID array used for the sorting is not
scaling. It takes so long to build the candidate list that
the connection gets closed
Fix description:
Instead of using an ID array, uses a list of ID ranges
fixes: #7096
Reviewed by: Mark Reynolds, Pierre Rogier (Thanks !!)
---
ldap/servers/slapd/back-ldbm/back-ldbm.h | 12 ++
ldap/servers/slapd/back-ldbm/idl_common.c | 163 ++++++++++++++++++
ldap/servers/slapd/back-ldbm/idl_new.c | 30 ++--
.../servers/slapd/back-ldbm/proto-back-ldbm.h | 3 +
4 files changed, 189 insertions(+), 19 deletions(-)
diff --git a/ldap/servers/slapd/back-ldbm/back-ldbm.h b/ldap/servers/slapd/back-ldbm/back-ldbm.h
index 1bc36720d..b187c26bc 100644
--- a/ldap/servers/slapd/back-ldbm/back-ldbm.h
+++ b/ldap/servers/slapd/back-ldbm/back-ldbm.h
@@ -282,6 +282,18 @@ typedef struct _idlist_set
#define INDIRECT_BLOCK(idl) ((idl)->b_nids == INDBLOCK)
#define IDL_NIDS(idl) (idl ? (idl)->b_nids : (NIDS)0)
+/*
+ * used by the supplier during online total init
+ * it stores the ranges of ID that are already present
+ * in the candidate list ('parentid>=1')
+ */
+typedef struct IdRange {
+ ID first;
+ ID last;
+ struct IdRange *next;
+} IdRange_t;
+
+
typedef size_t idl_iterator;
/* small hashtable implementation used in the entry cache -- the table
diff --git a/ldap/servers/slapd/back-ldbm/idl_common.c b/ldap/servers/slapd/back-ldbm/idl_common.c
index fcb0ece4b..fdc9b4e67 100644
--- a/ldap/servers/slapd/back-ldbm/idl_common.c
+++ b/ldap/servers/slapd/back-ldbm/idl_common.c
@@ -172,6 +172,169 @@ idl_min(IDList *a, IDList *b)
return (a->b_nids > b->b_nids ? b : a);
}
+/*
+ * This is a faster version of idl_id_is_in_idlist.
+ * idl_id_is_in_idlist uses an array of ID so lookup is expensive
+ * idl_id_is_in_idlist_ranges uses a list of ranges of ID lookup is faster
+ * returns
+ * 1: 'id' is present in idrange_list
+ * 0: 'id' is not present in idrange_list
+ */
+int
+idl_id_is_in_idlist_ranges(IDList *idl, IdRange_t *idrange_list, ID id)
+{
+ IdRange_t *range = idrange_list;
+ int found = 0;
+
+ if (NULL == idl || NOID == id) {
+ return 0; /* not in the list */
+ }
+ if (ALLIDS(idl)) {
+ return 1; /* in the list */
+ }
+
+ for(;range; range = range->next) {
+ if (id > range->last) {
+ /* check if it belongs to the next range */
+ continue;
+ }
+ if (id >= range->first) {
+ /* It belongs to that range [first..last ] */
+ found = 1;
+ break;
+ } else {
+ /* this range is after id */
+ break;
+ }
+ }
+ return found;
+}
+
+/* This function is used during the online total initialisation
+ * (see next function)
+ * It frees all ranges of ID in the list
+ */
+void idrange_free(IdRange_t **head)
+{
+ IdRange_t *curr, *sav;
+
+ if ((head == NULL) || (*head == NULL)) {
+ return;
+ }
+ curr = *head;
+ sav = NULL;
+ for (; curr;) {
+ sav = curr;
+ curr = curr->next;
+ slapi_ch_free((void *) &sav);
+ }
+ if (sav) {
+ slapi_ch_free((void *) &sav);
+ }
+ *head = NULL;
+}
+
+/* This function is used during the online total initialisation
+ * Because a MODRDN can move entries under a parent that
+ * has a higher ID we need to sort the IDList so that parents
+ * are sent, to the consumer, before the children are sent.
+ * The sorting with a simple IDlist does not scale instead
+ * a list of IDs ranges is much faster.
+ * In that list we only ADD/lookup ID.
+ */
+IdRange_t *idrange_add_id(IdRange_t **head, ID id)
+{
+ if (head == NULL) {
+ slapi_log_err(SLAPI_LOG_ERR, "idrange_add_id",
+ "Can not add ID %d in non defined list\n", id);
+ return NULL;
+ }
+
+ if (*head == NULL) {
+ /* This is the first range */
+ IdRange_t *new_range = (IdRange_t *)slapi_ch_malloc(sizeof(IdRange_t));
+ new_range->first = id;
+ new_range->last = id;
+ new_range->next = NULL;
+ *head = new_range;
+ return *head;
+ }
+
+ IdRange_t *curr = *head, *prev = NULL;
+
+ /* First, find if id already falls within any existing range, or it is adjacent to any */
+ while (curr) {
+ if (id >= curr->first && id <= curr->last) {
+ /* inside a range, nothing to do */
+ return curr;
+ }
+
+ if (id == curr->last + 1) {
+ /* Extend this range upwards */
+ curr->last = id;
+
+ /* Check for possible merge with next range */
+ IdRange_t *next = curr->next;
+ if (next && curr->last + 1 >= next->first) {
+ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id",
+ "(id=%d) merge current with next range [%d..%d]\n", id, curr->first, curr->last);
+ curr->last = (next->last > curr->last) ? next->last : curr->last;
+ curr->next = next->next;
+ slapi_ch_free((void*) &next);
+ } else {
+ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id",
+ "(id=%d) extend forward current range [%d..%d]\n", id, curr->first, curr->last);
+ }
+ return curr;
+ }
+
+ if (id + 1 == curr->first) {
+ /* Extend this range downwards */
+ curr->first = id;
+
+ /* Check for possible merge with previous range */
+ if (prev && prev->last + 1 >= curr->first) {
+ prev->last = curr->last;
+ prev->next = curr->next;
+ slapi_ch_free((void *) &curr);
+ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id",
+ "(id=%d) merge current with previous range [%d..%d]\n", id, prev->first, prev->last);
+ return prev;
+ } else {
+ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id",
+ "(id=%d) extend backward current range [%d..%d]\n", id, curr->first, curr->last);
+ return curr;
+ }
+ }
+
+ /* If id is before the current range, break so we can insert before */
+ if (id < curr->first) {
+ break;
+ }
+
+ prev = curr;
+ curr = curr->next;
+ }
+ /* Need to insert a new standalone IdRange */
+ IdRange_t *new_range = (IdRange_t *)slapi_ch_malloc(sizeof(IdRange_t));
+ new_range->first = id;
+ new_range->last = id;
+ new_range->next = curr;
+
+ if (prev) {
+ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id",
+ "(id=%d) add new range [%d..%d]\n", id, new_range->first, new_range->last);
+ prev->next = new_range;
+ } else {
+ /* Insert at head */
+ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id",
+ "(id=%d) head range [%d..%d]\n", id, new_range->first, new_range->last);
+ *head = new_range;
+ }
+ return *head;
+}
+
+
int
idl_id_is_in_idlist(IDList *idl, ID id)
{
diff --git a/ldap/servers/slapd/back-ldbm/idl_new.c b/ldap/servers/slapd/back-ldbm/idl_new.c
index 5fbcaff2e..2d978353f 100644
--- a/ldap/servers/slapd/back-ldbm/idl_new.c
+++ b/ldap/servers/slapd/back-ldbm/idl_new.c
@@ -417,7 +417,6 @@ idl_new_range_fetch(
{
int ret = 0;
int ret2 = 0;
- int idl_rc = 0;
dbi_cursor_t cursor = {0};
IDList *idl = NULL;
dbi_val_t cur_key = {0};
@@ -436,6 +435,7 @@ idl_new_range_fetch(
size_t leftoverlen = 32;
size_t leftovercnt = 0;
char *index_id = get_index_name(be, db, ai);
+ IdRange_t *idrange_list = NULL;
if (NULL == flag_err) {
@@ -578,10 +578,12 @@ idl_new_range_fetch(
* found entry is the one from the suffix
*/
suffix = key;
- idl_rc = idl_append_extend(&idl, id);
- } else if ((key == suffix) || idl_id_is_in_idlist(idl, key)) {
+ idl_append_extend(&idl, id);
+ idrange_add_id(&idrange_list, id);
+ } else if ((key == suffix) || idl_id_is_in_idlist_ranges(idl, idrange_list, key)) {
/* the parent is the suffix or already in idl. */
- idl_rc = idl_append_extend(&idl, id);
+ idl_append_extend(&idl, id);
+ idrange_add_id(&idrange_list, id);
} else {
/* Otherwise, keep the {key,id} in leftover array */
if (!leftover) {
@@ -596,13 +598,7 @@ idl_new_range_fetch(
leftovercnt++;
}
} else {
- idl_rc = idl_append_extend(&idl, id);
- }
- if (idl_rc) {
- slapi_log_err(SLAPI_LOG_ERR, "idl_new_range_fetch",
- "Unable to extend id list (err=%d)\n", idl_rc);
- idl_free(&idl);
- goto error;
+ idl_append_extend(&idl, id);
}
count++;
@@ -695,21 +691,17 @@ error:
while(remaining > 0) {
for (size_t i = 0; i < leftovercnt; i++) {
- if (leftover[i].key > 0 && idl_id_is_in_idlist(idl, leftover[i].key) != 0) {
+ if (leftover[i].key > 0 && idl_id_is_in_idlist_ranges(idl, idrange_list, leftover[i].key) != 0) {
/* if the leftover key has its parent in the idl */
- idl_rc = idl_append_extend(&idl, leftover[i].id);
- if (idl_rc) {
- slapi_log_err(SLAPI_LOG_ERR, "idl_new_range_fetch",
- "Unable to extend id list (err=%d)\n", idl_rc);
- idl_free(&idl);
- return NULL;
- }
+ idl_append_extend(&idl, leftover[i].id);
+ idrange_add_id(&idrange_list, leftover[i].id);
leftover[i].key = 0;
remaining--;
}
}
}
slapi_ch_free((void **)&leftover);
+ idrange_free(&idrange_list);
}
slapi_log_err(SLAPI_LOG_FILTER, "idl_new_range_fetch",
"Found %d candidates; error code is: %d\n",
diff --git a/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h b/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h
index 91d61098a..30a7aa11f 100644
--- a/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h
+++ b/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h
@@ -217,6 +217,9 @@ ID idl_firstid(IDList *idl);
ID idl_nextid(IDList *idl, ID id);
int idl_init_private(backend *be, struct attrinfo *a);
int idl_release_private(struct attrinfo *a);
+IdRange_t *idrange_add_id(IdRange_t **head, ID id);
+void idrange_free(IdRange_t **head);
+int idl_id_is_in_idlist_ranges(IDList *idl, IdRange_t *idrange_list, ID id);
int idl_id_is_in_idlist(IDList *idl, ID id);
idl_iterator idl_iterator_init(const IDList *idl);
--
2.52.0
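The patch above replaces a flat ID array with a sorted, disjoint list of [first..last] ranges: inserting an ID either lands inside a range, extends a neighbouring range (merging when two ranges become adjacent), or creates a new standalone range. A minimal Python model of that insert/merge logic — function names are mine, not from the C code:

```python
def add_id(ranges, i):
    """Insert id i into a sorted list of disjoint [first, last] ranges,
    extending or merging neighbouring ranges as idrange_add_id does."""
    for idx, r in enumerate(ranges):
        first, last = r
        if first <= i <= last:          # already covered by this range
            return ranges
        if i == last + 1:               # extend upwards, maybe merge with next
            r[1] = i
            if idx + 1 < len(ranges) and r[1] + 1 >= ranges[idx + 1][0]:
                r[1] = max(r[1], ranges[idx + 1][1])
                del ranges[idx + 1]
            return ranges
        if i + 1 == first:              # extend downwards, maybe merge with previous
            r[0] = i
            if idx > 0 and ranges[idx - 1][1] + 1 >= r[0]:
                ranges[idx - 1][1] = r[1]
                del ranges[idx]
            return ranges
        if i < first:                   # insert a new standalone range here
            ranges.insert(idx, [i, i])
            return ranges
    ranges.append([i, i])               # past the end of all ranges
    return ranges


def contains(ranges, i):
    """Membership test over ranges, the analogue of idl_id_is_in_idlist_ranges."""
    return any(first <= i <= last for first, last in ranges)
```

The membership walk is linear in the number of ranges rather than the number of individual IDs, which is what lets the candidate-list sort scale on large databases, per the bug description.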

@@ -1,311 +0,0 @@
From 1aabba9b17f99eb1a460be3305aad4b7099b9fe6 Mon Sep 17 00:00:00 2001
From: progier389 <progier@redhat.com>
Date: Wed, 13 Nov 2024 15:31:35 +0100
Subject: [PATCH] Issue 6374 - nsslapd-mdb-max-dbs autotuning doesn't work
properly (#6400)
* Issue 6374 - nsslapd-mdb-max-dbs autotuning doesn't work properly
Several issues:
After restarting the server nsslapd-mdb-max-dbs may not be high enough to add a new backend
because the value computation is wrong.
dbscan fails to open the database if nsslapd-mdb-max-dbs has been increased.
dbscan crashes when closing the database (typically when using -S)
When starting the instance the nsslapd-mdb-max-dbs parameter is increased to ensure that a new backend may be added.
When dse.ldif path is not specified, the db environment is now open using the INFO.mdb data instead of using the default values.
synchronization between thread closure and database context destruction is hardened
Issue: #6374
Reviewed by: @tbordaz , @vashirov (Thanks!)
(cherry picked from commit 56cd3389da608a3f6eeee58d20dffbcd286a8033)
---
.../tests/suites/config/config_test.py | 86 +++++++++++++++++++
ldap/servers/slapd/back-ldbm/back-ldbm.h | 2 +
.../slapd/back-ldbm/db-mdb/mdb_config.c | 17 ++--
.../back-ldbm/db-mdb/mdb_import_threads.c | 9 +-
.../slapd/back-ldbm/db-mdb/mdb_instance.c | 8 ++
ldap/servers/slapd/back-ldbm/dbimpl.c | 2 +-
ldap/servers/slapd/back-ldbm/import.c | 14 ++-
7 files changed, 128 insertions(+), 10 deletions(-)
diff --git a/dirsrvtests/tests/suites/config/config_test.py b/dirsrvtests/tests/suites/config/config_test.py
index c3e26eed4..08544594f 100644
--- a/dirsrvtests/tests/suites/config/config_test.py
+++ b/dirsrvtests/tests/suites/config/config_test.py
@@ -17,6 +17,7 @@ from lib389.topologies import topology_m2, topology_st as topo
from lib389.utils import *
from lib389._constants import DN_CONFIG, DEFAULT_SUFFIX, DEFAULT_BENAME
from lib389._mapped_object import DSLdapObjects
+from lib389.agreement import Agreements
from lib389.cli_base import FakeArgs
from lib389.cli_conf.backend import db_config_set
from lib389.idm.user import UserAccounts, TEST_USER_PROPERTIES
@@ -27,6 +28,8 @@ from lib389.cos import CosPointerDefinitions, CosTemplates
from lib389.backend import Backends, DatabaseConfig
from lib389.monitor import MonitorLDBM, Monitor
from lib389.plugins import ReferentialIntegrityPlugin
+from lib389.replica import BootstrapReplicationManager, Replicas
+from lib389.passwd import password_generate
pytestmark = pytest.mark.tier0
@@ -36,6 +39,8 @@ PSTACK_CMD = '/usr/bin/pstack'
logging.getLogger(__name__).setLevel(logging.INFO)
log = logging.getLogger(__name__)
+DEBUGGING = os.getenv("DEBUGGING", default=False)
+
@pytest.fixture(scope="module")
def big_file():
TEMP_BIG_FILE = ''
@@ -813,6 +818,87 @@ def test_numlisteners_limit(topo):
assert numlisteners[0] == '4'
+def bootstrap_replication(inst_from, inst_to, creds):
+ manager = BootstrapReplicationManager(inst_to)
+ rdn_val = 'replication manager'
+ if manager.exists():
+ manager.delete()
+ manager.create(properties={
+ 'cn': rdn_val,
+ 'uid': rdn_val,
+ 'userPassword': creds
+ })
+ for replica in Replicas(inst_to).list():
+ replica.remove_all('nsDS5ReplicaBindDNGroup')
+ replica.replace('nsDS5ReplicaBindDN', manager.dn)
+ for agmt in Agreements(inst_from).list():
+ agmt.replace('nsDS5ReplicaBindDN', manager.dn)
+ agmt.replace('nsDS5ReplicaCredentials', creds)
+
+
+@pytest.mark.skipif(get_default_db_lib() != "mdb", reason="This test requires lmdb")
+def test_lmdb_autotuned_maxdbs(topology_m2, request):
+ """Verify that after restart, nsslapd-mdb-max-dbs is large enough to add a new backend.
+
+ :id: 0272d432-9080-11ef-8f40-482ae39447e5
+ :setup: Two suppliers configuration
+ :steps:
+ 1. loop 20 times
+ 2. In 1 loop: restart instance
+ 3. In 1 loop: add a new backend
+ 4. In 1 loop: check that instance is still alive
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. Success
+ """
+
+ s1 = topology_m2.ms["supplier1"]
+ s2 = topology_m2.ms["supplier2"]
+
+ backends = Backends(s1)
+ db_config = DatabaseConfig(s1)
+ # Generate the teardown finalizer
+ belist = []
+ creds=password_generate()
+ bootstrap_replication(s2, s1, creds)
+ bootstrap_replication(s1, s2, creds)
+
+ def fin():
+ s1.start()
+ for be in belist:
+ be.delete()
+
+ if not DEBUGGING:
+ request.addfinalizer(fin)
+
+ # 1. Set autotuning (off-line to be able to decrease the value)
+ s1.stop()
+ dse_ldif = DSEldif(s1)
+ dse_ldif.replace(db_config.dn, 'nsslapd-mdb-max-dbs', '0')
+ os.remove(f'{s1.dbdir}/data.mdb')
+ s1.start()
+
+ # 2. Reinitialize the db:
+ log.info("Bulk import...")
+ agmt = Agreements(s2).list()[0]
+ agmt.begin_reinit()
+ (done, error) = agmt.wait_reinit()
+ log.info(f'Bulk import result is ({done}, {error})')
+ assert done is True
+ assert error is False
+
+ # 3. loop 20 times
+ for idx in range(20):
+ s1.restart()
+ log.info(f'Adding backend test{idx}')
+ belist.append(backends.create(properties={'cn': f'test{idx}',
+ 'nsslapd-suffix': f'dc=test{idx}'}))
+ assert s1.status()
+
+
+
if __name__ == '__main__':
# Run isolated
# -s for DEBUG mode
diff --git a/ldap/servers/slapd/back-ldbm/back-ldbm.h b/ldap/servers/slapd/back-ldbm/back-ldbm.h
index 8fea63e35..35d0ece04 100644
--- a/ldap/servers/slapd/back-ldbm/back-ldbm.h
+++ b/ldap/servers/slapd/back-ldbm/back-ldbm.h
@@ -896,4 +896,6 @@ typedef struct _back_search_result_set
((L)->size == (R)->size && !memcmp((L)->data, (R)->data, (L)->size))
typedef int backend_implement_init_fn(struct ldbminfo *li, config_info *config_array);
+
+pthread_mutex_t *get_import_ctx_mutex();
#endif /* _back_ldbm_h_ */
diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c
index 351f54037..1f7b71442 100644
--- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c
+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c
@@ -83,7 +83,7 @@ dbmdb_compute_limits(struct ldbminfo *li)
uint64_t total_space = 0;
uint64_t avail_space = 0;
uint64_t cur_dbsize = 0;
- int nbchangelogs = 0;
+ int nbvlvs = 0;
int nbsuffixes = 0;
int nbindexes = 0;
int nbagmt = 0;
@@ -99,8 +99,8 @@ dbmdb_compute_limits(struct ldbminfo *li)
* But some tunable may be autotuned.
*/
if (dbmdb_count_config_entries("(objectClass=nsMappingTree)", &nbsuffixes) ||
- dbmdb_count_config_entries("(objectClass=nsIndex)", &nbsuffixes) ||
- dbmdb_count_config_entries("(&(objectClass=nsds5Replica)(nsDS5Flags=1))", &nbchangelogs) ||
+ dbmdb_count_config_entries("(objectClass=nsIndex)", &nbindexes) ||
+ dbmdb_count_config_entries("(objectClass=vlvIndex)", &nbvlvs) ||
dbmdb_count_config_entries("(objectClass=nsds5replicationagreement)", &nbagmt)) {
/* error message is already logged */
return 1;
@@ -120,8 +120,15 @@ dbmdb_compute_limits(struct ldbminfo *li)
info->pagesize = sysconf(_SC_PAGE_SIZE);
limits->min_readers = config_get_threadnumber() + nbagmt + DBMDB_READERS_MARGIN;
- /* Default indexes are counted in "nbindexes" so we should always have enough resource to add 1 new suffix */
- limits->min_dbs = nbsuffixes + nbindexes + nbchangelogs + DBMDB_DBS_MARGIN;
+ /*
+ * For each suffix there are 4 databases instances:
+ * long-entryrdn, replication_changelog, id2entry and ancestorid
+ * then the indexes and the vlv and vlv cache
+ *
+ * Default indexes are counted in "nbindexes" so we should always have enough
+ * resource to add 1 new suffix
+ */
+ limits->min_dbs = 4*nbsuffixes + nbindexes + 2*nbvlvs + DBMDB_DBS_MARGIN;
total_space = ((uint64_t)(buf.f_blocks)) * ((uint64_t)(buf.f_bsize));
avail_space = ((uint64_t)(buf.f_bavail)) * ((uint64_t)(buf.f_bsize));
diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c
index 8c879da31..707a110c5 100644
--- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c
+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c
@@ -4312,9 +4312,12 @@ dbmdb_import_init_writer(ImportJob *job, ImportRole_t role)
void
dbmdb_free_import_ctx(ImportJob *job)
{
- if (job->writer_ctx) {
- ImportCtx_t *ctx = job->writer_ctx;
- job->writer_ctx = NULL;
+ ImportCtx_t *ctx = NULL;
+ pthread_mutex_lock(get_import_ctx_mutex());
+ ctx = job->writer_ctx;
+ job->writer_ctx = NULL;
+ pthread_mutex_unlock(get_import_ctx_mutex());
+ if (ctx) {
pthread_mutex_destroy(&ctx->workerq.mutex);
pthread_cond_destroy(&ctx->workerq.cv);
slapi_ch_free((void**)&ctx->workerq.slots);
diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_instance.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_instance.c
index 6386ecf06..05f1e348d 100644
--- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_instance.c
+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_instance.c
@@ -287,6 +287,13 @@ int add_dbi(dbi_open_ctx_t *octx, backend *be, const char *fname, int flags)
slapi_ch_free((void**)&treekey.dbname);
return octx->rc;
}
+ if (treekey.dbi >= ctx->dsecfg.max_dbs) {
+ octx->rc = MDB_DBS_FULL;
+ slapi_log_err(SLAPI_LOG_ERR, "add_dbi", "Failed to open database instance %s slots: %d/%d. Error is %d: %s.\n",
+ treekey.dbname, treekey.dbi, ctx->dsecfg.max_dbs, octx->rc, mdb_strerror(octx->rc));
+ slapi_ch_free((void**)&treekey.dbname);
+ return octx->rc;
+ }
if (octx->ai && octx->ai->ai_key_cmp_fn) {
octx->rc = dbmdb_update_dbi_cmp_fn(ctx, &treekey, octx->ai->ai_key_cmp_fn, octx->txn);
if (octx->rc) {
@@ -689,6 +696,7 @@ int dbmdb_make_env(dbmdb_ctx_t *ctx, int readOnly, mdb_mode_t mode)
rc = dbmdb_write_infofile(ctx);
} else {
/* No Config ==> read it from info file */
+ ctx->dsecfg = ctx->startcfg;
}
if (rc) {
return rc;
diff --git a/ldap/servers/slapd/back-ldbm/dbimpl.c b/ldap/servers/slapd/back-ldbm/dbimpl.c
index 86df986bd..f3bf68a9f 100644
--- a/ldap/servers/slapd/back-ldbm/dbimpl.c
+++ b/ldap/servers/slapd/back-ldbm/dbimpl.c
@@ -505,7 +505,7 @@ int dblayer_show_statistics(const char *dbimpl_name, const char *dbhome, FILE *f
li->li_plugin = be->be_database;
li->li_plugin->plg_name = (char*) "back-ldbm-dbimpl";
li->li_plugin->plg_libpath = (char*) "libback-ldbm";
- li->li_directory = (char*)dbhome;
+ li->li_directory = get_li_directory(dbhome);
/* Initialize database plugin */
rc = dbimpl_setup(li, dbimpl_name);
diff --git a/ldap/servers/slapd/back-ldbm/import.c b/ldap/servers/slapd/back-ldbm/import.c
index 2bb8cb581..30ec462fa 100644
--- a/ldap/servers/slapd/back-ldbm/import.c
+++ b/ldap/servers/slapd/back-ldbm/import.c
@@ -27,6 +27,9 @@
#define NEED_DN_NORM_SP -25
#define NEED_DN_NORM_BT -26
+/* Protect against import context destruction */
+static pthread_mutex_t import_ctx_mutex = PTHREAD_MUTEX_INITIALIZER;
+
/********** routines to manipulate the entry fifo **********/
@@ -143,6 +146,14 @@ ldbm_back_wire_import(Slapi_PBlock *pb)
/* Threads management */
+/* Return the mutex that protects against import context destruction */
+pthread_mutex_t *
+get_import_ctx_mutex()
+{
+ return &import_ctx_mutex;
+}
+
+
/* tell all the threads to abort */
void
import_abort_all(ImportJob *job, int wait_for_them)
@@ -151,7 +162,7 @@ import_abort_all(ImportJob *job, int wait_for_them)
/* tell all the worker threads to abort */
job->flags |= FLAG_ABORT;
-
+ pthread_mutex_lock(&import_ctx_mutex);
for (worker = job->worker_list; worker; worker = worker->next)
worker->command = ABORT;
@@ -167,6 +178,7 @@ import_abort_all(ImportJob *job, int wait_for_them)
}
}
}
+ pthread_mutex_unlock(&import_ctx_mutex);
}
--
2.48.0
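The `dbmdb_free_import_ctx()` change in the patch above applies a detach-then-destroy pattern: the shared `writer_ctx` pointer is taken out of the job under `get_import_ctx_mutex()`, and only the detached copy is torn down. The sketch below isolates that pattern with a hypothetical `import_ctx_t` and `free_import_ctx()`; the real `ImportCtx_t` carries work queues and thread state that are omitted here.

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical stand-in for the import context; the real structure
 * holds worker queues, condition variables, and slot arrays. */
typedef struct {
    int state;
} import_ctx_t;

pthread_mutex_t import_ctx_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Detach-then-destroy: swap the shared slot to NULL under the mutex,
 * then free the detached pointer outside the critical section. Any
 * other thread reading the slot under the same mutex sees either the
 * live context or NULL, never a dangling pointer. A second call finds
 * NULL and does nothing, so concurrent teardown paths are safe. */
void free_import_ctx(import_ctx_t **slot)
{
    import_ctx_t *ctx;

    pthread_mutex_lock(&import_ctx_mutex);
    ctx = *slot;
    *slot = NULL;
    pthread_mutex_unlock(&import_ctx_mutex);

    if (ctx) {
        free(ctx);
    }
}
```

This is why `import_abort_all()` in the same patch takes the mutex before walking the worker list: the walk and the context destruction are now serialized on the same lock.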

@@ -0,0 +1,765 @@
From 446bc42e7b64a8496c2c3fe486f86bba318bed5e Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Wed, 7 Jan 2026 16:55:27 -0500
Subject: [PATCH] Issue - Revise paged result search locking
Description:
Move to a single lock approach verses having two locks. This will impact
concurrency when multiple async paged result searches are done on the same
connection, but it simplifies the code and avoids race conditions and
deadlocks.
Relates: https://github.com/389ds/389-ds-base/issues/7118
Reviewed by: progier & tbordaz (Thanks!!)
---
ldap/servers/slapd/abandon.c | 2 +-
ldap/servers/slapd/opshared.c | 60 ++++----
ldap/servers/slapd/pagedresults.c | 228 +++++++++++++++++++-----------
ldap/servers/slapd/proto-slap.h | 26 ++--
ldap/servers/slapd/slap.h | 5 +-
5 files changed, 187 insertions(+), 134 deletions(-)
diff --git a/ldap/servers/slapd/abandon.c b/ldap/servers/slapd/abandon.c
index 6024fcd31..1f47c531c 100644
--- a/ldap/servers/slapd/abandon.c
+++ b/ldap/servers/slapd/abandon.c
@@ -179,7 +179,7 @@ do_abandon(Slapi_PBlock *pb)
logpb.tv_sec = -1;
logpb.tv_nsec = -1;
- if (0 == pagedresults_free_one_msgid(pb_conn, id, pageresult_lock_get_addr(pb_conn))) {
+ if (0 == pagedresults_free_one_msgid(pb_conn, id, PR_NOT_LOCKED)) {
if (log_format != LOG_FORMAT_DEFAULT) {
/* JSON logging */
logpb.target_op = "Simple Paged Results";
diff --git a/ldap/servers/slapd/opshared.c b/ldap/servers/slapd/opshared.c
index a5cddfd23..bf800f7dc 100644
--- a/ldap/servers/slapd/opshared.c
+++ b/ldap/servers/slapd/opshared.c
@@ -572,8 +572,8 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
be = be_list[index];
}
}
- pr_search_result = pagedresults_get_search_result(pb_conn, operation, 0 /*not locked*/, pr_idx);
- estimate = pagedresults_get_search_result_set_size_estimate(pb_conn, operation, pr_idx);
+ pr_search_result = pagedresults_get_search_result(pb_conn, operation, PR_NOT_LOCKED, pr_idx);
+ estimate = pagedresults_get_search_result_set_size_estimate(pb_conn, operation, PR_NOT_LOCKED, pr_idx);
/* Set operation note flags as required. */
if (pagedresults_get_unindexed(pb_conn, operation, pr_idx)) {
slapi_pblock_set_flag_operation_notes(pb, SLAPI_OP_NOTE_UNINDEXED);
@@ -619,14 +619,7 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
int32_t tlimit;
slapi_pblock_get(pb, SLAPI_SEARCH_TIMELIMIT, &tlimit);
pagedresults_set_timelimit(pb_conn, operation, (time_t)tlimit, pr_idx);
- /* When using this mutex in conjunction with the main paged
- * result lock, you must do so in this order:
- *
- * --> pagedresults_lock()
- * --> pagedresults_mutex
- * <-- pagedresults_mutex
- * <-- pagedresults_unlock()
- */
+ /* IMPORTANT: Never acquire pagedresults_mutex when holding c_mutex. */
pagedresults_mutex = pageresult_lock_get_addr(pb_conn);
}
@@ -743,17 +736,15 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
if (op_is_pagedresults(operation) && pr_search_result) {
void *sr = NULL;
/* PAGED RESULTS and already have the search results from the prev op */
- pagedresults_lock(pb_conn, pr_idx);
/*
* In async paged result case, the search result might be released
* by other theads. We need to double check it in the locked region.
*/
pthread_mutex_lock(pagedresults_mutex);
- pr_search_result = pagedresults_get_search_result(pb_conn, operation, 1 /*locked*/, pr_idx);
+ pr_search_result = pagedresults_get_search_result(pb_conn, operation, PR_LOCKED, pr_idx);
if (pr_search_result) {
- if (pagedresults_is_abandoned_or_notavailable(pb_conn, 1 /*locked*/, pr_idx)) {
+ if (pagedresults_is_abandoned_or_notavailable(pb_conn, PR_LOCKED, pr_idx)) {
pthread_mutex_unlock(pagedresults_mutex);
- pagedresults_unlock(pb_conn, pr_idx);
/* Previous operation was abandoned and the simplepaged object is not in use. */
send_ldap_result(pb, 0, NULL, "Simple Paged Results Search abandoned", 0, NULL);
rc = LDAP_SUCCESS;
@@ -764,14 +755,13 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
/* search result could be reset in the backend/dse */
slapi_pblock_get(pb, SLAPI_SEARCH_RESULT_SET, &sr);
- pagedresults_set_search_result(pb_conn, operation, sr, 1 /*locked*/, pr_idx);
+ pagedresults_set_search_result(pb_conn, operation, sr, PR_LOCKED, pr_idx);
}
} else {
pr_stat = PAGEDRESULTS_SEARCH_END;
rc = LDAP_SUCCESS;
}
pthread_mutex_unlock(pagedresults_mutex);
- pagedresults_unlock(pb_conn, pr_idx);
if ((PAGEDRESULTS_SEARCH_END == pr_stat) || (0 == pnentries)) {
/* no more entries to send in the backend */
@@ -789,22 +779,22 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
}
pagedresults_set_response_control(pb, 0, estimate,
curr_search_count, pr_idx);
- if (pagedresults_get_with_sort(pb_conn, operation, pr_idx)) {
+ if (pagedresults_get_with_sort(pb_conn, operation, PR_NOT_LOCKED, pr_idx)) {
sort_make_sort_response_control(pb, CONN_GET_SORT_RESULT_CODE, NULL);
}
pagedresults_set_search_result_set_size_estimate(pb_conn,
operation,
- estimate, pr_idx);
+ estimate, PR_NOT_LOCKED, pr_idx);
if (PAGEDRESULTS_SEARCH_END == pr_stat) {
- pagedresults_lock(pb_conn, pr_idx);
+ pthread_mutex_lock(pagedresults_mutex);
slapi_pblock_set(pb, SLAPI_SEARCH_RESULT_SET, NULL);
- if (!pagedresults_is_abandoned_or_notavailable(pb_conn, 0 /*not locked*/, pr_idx)) {
- pagedresults_free_one(pb_conn, operation, pr_idx);
+ if (!pagedresults_is_abandoned_or_notavailable(pb_conn, PR_LOCKED, pr_idx)) {
+ pagedresults_free_one(pb_conn, operation, PR_LOCKED, pr_idx);
}
- pagedresults_unlock(pb_conn, pr_idx);
+ pthread_mutex_unlock(pagedresults_mutex);
if (next_be) {
/* no more entries, but at least another backend */
- if (pagedresults_set_current_be(pb_conn, next_be, pr_idx, 0) < 0) {
+ if (pagedresults_set_current_be(pb_conn, next_be, pr_idx, PR_NOT_LOCKED) < 0) {
goto free_and_return;
}
}
@@ -915,7 +905,7 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
}
}
pagedresults_set_search_result(pb_conn, operation, NULL, 1, pr_idx);
- rc = pagedresults_set_current_be(pb_conn, NULL, pr_idx, 1);
+ rc = pagedresults_set_current_be(pb_conn, NULL, pr_idx, PR_LOCKED);
pthread_mutex_unlock(pagedresults_mutex);
#pragma GCC diagnostic pop
}
@@ -954,7 +944,7 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
pthread_mutex_lock(pagedresults_mutex);
pagedresults_set_search_result(pb_conn, operation, NULL, 1, pr_idx);
be->be_search_results_release(&sr);
- rc = pagedresults_set_current_be(pb_conn, next_be, pr_idx, 1);
+ rc = pagedresults_set_current_be(pb_conn, next_be, pr_idx, PR_LOCKED);
pthread_mutex_unlock(pagedresults_mutex);
pr_stat = PAGEDRESULTS_SEARCH_END; /* make sure stat is SEARCH_END */
if (NULL == next_be) {
@@ -967,23 +957,23 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
} else {
curr_search_count = pnentries;
slapi_pblock_get(pb, SLAPI_SEARCH_RESULT_SET_SIZE_ESTIMATE, &estimate);
- pagedresults_lock(pb_conn, pr_idx);
- if ((pagedresults_set_current_be(pb_conn, be, pr_idx, 0) < 0) ||
- (pagedresults_set_search_result(pb_conn, operation, sr, 0, pr_idx) < 0) ||
- (pagedresults_set_search_result_count(pb_conn, operation, curr_search_count, pr_idx) < 0) ||
- (pagedresults_set_search_result_set_size_estimate(pb_conn, operation, estimate, pr_idx) < 0) ||
- (pagedresults_set_with_sort(pb_conn, operation, with_sort, pr_idx) < 0)) {
- pagedresults_unlock(pb_conn, pr_idx);
+ pthread_mutex_lock(pagedresults_mutex);
+ if ((pagedresults_set_current_be(pb_conn, be, pr_idx, PR_LOCKED) < 0) ||
+ (pagedresults_set_search_result(pb_conn, operation, sr, PR_LOCKED, pr_idx) < 0) ||
+ (pagedresults_set_search_result_count(pb_conn, operation, curr_search_count, PR_LOCKED, pr_idx) < 0) ||
+ (pagedresults_set_search_result_set_size_estimate(pb_conn, operation, estimate, PR_LOCKED, pr_idx) < 0) ||
+ (pagedresults_set_with_sort(pb_conn, operation, with_sort, PR_LOCKED, pr_idx) < 0)) {
+ pthread_mutex_unlock(pagedresults_mutex);
cache_return_target_entry(pb, be, operation);
goto free_and_return;
}
- pagedresults_unlock(pb_conn, pr_idx);
+ pthread_mutex_unlock(pagedresults_mutex);
}
slapi_pblock_set(pb, SLAPI_SEARCH_RESULT_SET, NULL);
next_be = NULL; /* to break the loop */
if (operation->o_status & SLAPI_OP_STATUS_ABANDONED) {
/* It turned out this search was abandoned. */
- pagedresults_free_one_msgid(pb_conn, operation->o_msgid, pagedresults_mutex);
+ pagedresults_free_one_msgid(pb_conn, operation->o_msgid, PR_NOT_LOCKED);
/* paged-results-request was abandoned; making an empty cookie. */
pagedresults_set_response_control(pb, 0, estimate, -1, pr_idx);
send_ldap_result(pb, 0, NULL, "Simple Paged Results Search abandoned", 0, NULL);
@@ -993,7 +983,7 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
}
pagedresults_set_response_control(pb, 0, estimate, curr_search_count, pr_idx);
if (curr_search_count == -1) {
- pagedresults_free_one(pb_conn, operation, pr_idx);
+ pagedresults_free_one(pb_conn, operation, PR_NOT_LOCKED, pr_idx);
}
}
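The opshared.c hunks above repeatedly swap an int flag or a passed-in mutex for a `bool locked` parameter (`PR_LOCKED` / `PR_NOT_LOCKED`). The sketch below reduces that accessor pattern to two hypothetical setters, `pr_set_count` and `pr_set_estimate`, guarding illustrative globals; the real accessors operate on the per-connection `prl_list` under the hash lock from `pageresult_lock_get_addr()`.

```c
#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t pr_lock = PTHREAD_MUTEX_INITIALIZER;
int pr_count = 0;
int pr_estimate = 0;

/* Each accessor either takes the lock itself (locked=false) or trusts
 * the caller to already hold it (locked=true), so the same function
 * works standalone and inside a larger critical section. */
int pr_set_count(int count, bool locked)
{
    if (!locked) {
        pthread_mutex_lock(&pr_lock);
    }
    pr_count = count;
    if (!locked) {
        pthread_mutex_unlock(&pr_lock);
    }
    return 0;
}

int pr_set_estimate(int estimate, bool locked)
{
    if (!locked) {
        pthread_mutex_lock(&pr_lock);
    }
    pr_estimate = estimate;
    if (!locked) {
        pthread_mutex_unlock(&pr_lock);
    }
    return 0;
}

/* A caller needing both updates to be atomic takes the lock once and
 * passes locked=true, mirroring the PR_LOCKED call sites in
 * op_shared_search. Passing locked=false here would self-deadlock
 * on a non-recursive mutex, which is the bug class the single-lock
 * rewrite is designed to make obvious. */
void pr_update_both(int count, int estimate)
{
    pthread_mutex_lock(&pr_lock);
    pr_set_count(count, true);
    pr_set_estimate(estimate, true);
    pthread_mutex_unlock(&pr_lock);
}
```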
diff --git a/ldap/servers/slapd/pagedresults.c b/ldap/servers/slapd/pagedresults.c
index 941ab97e3..0d6c4a1aa 100644
--- a/ldap/servers/slapd/pagedresults.c
+++ b/ldap/servers/slapd/pagedresults.c
@@ -34,9 +34,9 @@ pageresult_lock_cleanup()
slapi_ch_free((void**)&lock_hash);
}
-/* Beware to the lock order with c_mutex:
- * c_mutex is sometime locked while holding pageresult_lock
- * ==> Do not lock pageresult_lock when holing c_mutex
+/* Lock ordering constraint with c_mutex:
+ * c_mutex is sometimes locked while holding pageresult_lock.
+ * Therefore: DO NOT acquire pageresult_lock when holding c_mutex.
*/
pthread_mutex_t *
pageresult_lock_get_addr(Connection *conn)
@@ -44,7 +44,11 @@ pageresult_lock_get_addr(Connection *conn)
return &lock_hash[(((size_t)conn)/sizeof (Connection))%LOCK_HASH_SIZE];
}
-/* helper function to clean up one prp slot */
+/* helper function to clean up one prp slot
+ *
+ * NOTE: This function must be called while holding the pageresult_lock
+ * (via pageresult_lock_get_addr(conn)) to ensure thread-safe cleanup.
+ */
static void
_pr_cleanup_one_slot(PagedResults *prp)
{
@@ -56,7 +60,7 @@ _pr_cleanup_one_slot(PagedResults *prp)
prp->pr_current_be->be_search_results_release(&(prp->pr_search_result_set));
}
- /* clean up the slot except the mutex */
+ /* clean up the slot */
prp->pr_current_be = NULL;
prp->pr_search_result_set = NULL;
prp->pr_search_result_count = 0;
@@ -136,6 +140,8 @@ pagedresults_parse_control_value(Slapi_PBlock *pb,
return LDAP_UNWILLING_TO_PERFORM;
}
+ /* Acquire hash-based lock for paged results list access
+ * IMPORTANT: Never acquire this lock when holding c_mutex */
pthread_mutex_lock(pageresult_lock_get_addr(conn));
/* the ber encoding is no longer needed */
ber_free(ber, 1);
@@ -184,10 +190,6 @@ pagedresults_parse_control_value(Slapi_PBlock *pb,
goto bail;
}
- if ((*index > -1) && (*index < conn->c_pagedresults.prl_maxlen) &&
- !conn->c_pagedresults.prl_list[*index].pr_mutex) {
- conn->c_pagedresults.prl_list[*index].pr_mutex = PR_NewLock();
- }
conn->c_pagedresults.prl_count++;
} else {
/* Repeated paged results request.
@@ -327,8 +329,14 @@ bailout:
"<= idx=%d\n", index);
}
+/*
+ * Free one paged result entry by index.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_free_one(Connection *conn, Operation *op, int index)
+pagedresults_free_one(Connection *conn, Operation *op, bool locked, int index)
{
int rc = -1;
@@ -338,7 +346,9 @@ pagedresults_free_one(Connection *conn, Operation *op, int index)
slapi_log_err(SLAPI_LOG_TRACE, "pagedresults_free_one",
"=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (conn->c_pagedresults.prl_count <= 0) {
slapi_log_err(SLAPI_LOG_TRACE, "pagedresults_free_one",
"conn=%" PRIu64 " paged requests list count is %d\n",
@@ -349,7 +359,9 @@ pagedresults_free_one(Connection *conn, Operation *op, int index)
conn->c_pagedresults.prl_count--;
rc = 0;
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
}
slapi_log_err(SLAPI_LOG_TRACE, "pagedresults_free_one", "<= %d\n", rc);
@@ -357,21 +369,28 @@ pagedresults_free_one(Connection *conn, Operation *op, int index)
}
/*
- * Used for abandoning - pageresult_lock_get_addr(conn) is already locked in do_abandone.
+ * Free one paged result entry by message ID.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
*/
int
-pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, pthread_mutex_t *mutex)
+pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, bool locked)
{
int rc = -1;
int i;
+ pthread_mutex_t *lock = NULL;
if (conn && (msgid > -1)) {
if (conn->c_pagedresults.prl_maxlen <= 0) {
; /* Not a paged result. */
} else {
slapi_log_err(SLAPI_LOG_TRACE,
- "pagedresults_free_one_msgid_nolock", "=> msgid=%d\n", msgid);
- pthread_mutex_lock(mutex);
+ "pagedresults_free_one_msgid", "=> msgid=%d\n", msgid);
+ lock = pageresult_lock_get_addr(conn);
+ if (!locked) {
+ pthread_mutex_lock(lock);
+ }
for (i = 0; i < conn->c_pagedresults.prl_maxlen; i++) {
if (conn->c_pagedresults.prl_list[i].pr_msgid == msgid) {
PagedResults *prp = conn->c_pagedresults.prl_list + i;
@@ -390,9 +409,11 @@ pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, pthread_mutex_t *
break;
}
}
- pthread_mutex_unlock(mutex);
+ if (!locked) {
+ pthread_mutex_unlock(lock);
+ }
slapi_log_err(SLAPI_LOG_TRACE,
- "pagedresults_free_one_msgid_nolock", "<= %d\n", rc);
+ "pagedresults_free_one_msgid", "<= %d\n", rc);
}
}
@@ -418,29 +439,43 @@ pagedresults_get_current_be(Connection *conn, int index)
return be;
}
+/*
+ * Set current backend for a paged result entry.
+ *
+ * Locking: If locked=false, acquires pageresult_lock. If locked=true, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_set_current_be(Connection *conn, Slapi_Backend *be, int index, int nolock)
+pagedresults_set_current_be(Connection *conn, Slapi_Backend *be, int index, bool locked)
{
int rc = -1;
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_set_current_be", "=> idx=%d\n", index);
if (conn && (index > -1)) {
- if (!nolock)
+ if (!locked) {
pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
conn->c_pagedresults.prl_list[index].pr_current_be = be;
}
rc = 0;
- if (!nolock)
+ if (!locked) {
pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
}
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_set_current_be", "<= %d\n", rc);
return rc;
}
+/*
+ * Get search result set for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
void *
-pagedresults_get_search_result(Connection *conn, Operation *op, int locked, int index)
+pagedresults_get_search_result(Connection *conn, Operation *op, bool locked, int index)
{
void *sr = NULL;
if (!op_is_pagedresults(op)) {
@@ -465,8 +500,14 @@ pagedresults_get_search_result(Connection *conn, Operation *op, int locked, int
return sr;
}
+/*
+ * Set search result set for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, int locked, int index)
+pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, bool locked, int index)
{
int rc = -1;
if (!op_is_pagedresults(op)) {
@@ -494,8 +535,14 @@ pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, int lo
return rc;
}
+/*
+ * Get search result count for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_get_search_result_count(Connection *conn, Operation *op, int index)
+pagedresults_get_search_result_count(Connection *conn, Operation *op, bool locked, int index)
{
int count = 0;
if (!op_is_pagedresults(op)) {
@@ -504,19 +551,29 @@ pagedresults_get_search_result_count(Connection *conn, Operation *op, int index)
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_get_search_result_count", "=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
count = conn->c_pagedresults.prl_list[index].pr_search_result_count;
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
}
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_get_search_result_count", "<= %d\n", count);
return count;
}
+/*
+ * Set search result count for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_set_search_result_count(Connection *conn, Operation *op, int count, int index)
+pagedresults_set_search_result_count(Connection *conn, Operation *op, int count, bool locked, int index)
{
int rc = -1;
if (!op_is_pagedresults(op)) {
@@ -525,11 +582,15 @@ pagedresults_set_search_result_count(Connection *conn, Operation *op, int count,
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_set_search_result_count", "=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
conn->c_pagedresults.prl_list[index].pr_search_result_count = count;
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
rc = 0;
}
slapi_log_err(SLAPI_LOG_TRACE,
@@ -537,9 +598,16 @@ pagedresults_set_search_result_count(Connection *conn, Operation *op, int count,
return rc;
}
+/*
+ * Get search result set size estimate for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
pagedresults_get_search_result_set_size_estimate(Connection *conn,
Operation *op,
+ bool locked,
int index)
{
int count = 0;
@@ -550,11 +618,15 @@ pagedresults_get_search_result_set_size_estimate(Connection *conn,
"pagedresults_get_search_result_set_size_estimate",
"=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
count = conn->c_pagedresults.prl_list[index].pr_search_result_set_size_estimate;
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
}
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_get_search_result_set_size_estimate", "<= %d\n",
@@ -562,10 +634,17 @@ pagedresults_get_search_result_set_size_estimate(Connection *conn,
return count;
}
+/*
+ * Set search result set size estimate for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
pagedresults_set_search_result_set_size_estimate(Connection *conn,
Operation *op,
int count,
+ bool locked,
int index)
{
int rc = -1;
@@ -576,11 +655,15 @@ pagedresults_set_search_result_set_size_estimate(Connection *conn,
"pagedresults_set_search_result_set_size_estimate",
"=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
conn->c_pagedresults.prl_list[index].pr_search_result_set_size_estimate = count;
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
rc = 0;
}
slapi_log_err(SLAPI_LOG_TRACE,
@@ -589,8 +672,14 @@ pagedresults_set_search_result_set_size_estimate(Connection *conn,
return rc;
}
+/*
+ * Get with_sort flag for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_get_with_sort(Connection *conn, Operation *op, int index)
+pagedresults_get_with_sort(Connection *conn, Operation *op, bool locked, int index)
{
int flags = 0;
if (!op_is_pagedresults(op)) {
@@ -599,19 +688,29 @@ pagedresults_get_with_sort(Connection *conn, Operation *op, int index)
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_get_with_sort", "=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
flags = conn->c_pagedresults.prl_list[index].pr_flags & CONN_FLAG_PAGEDRESULTS_WITH_SORT;
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
}
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_get_with_sort", "<= %d\n", flags);
return flags;
}
+/*
+ * Set with_sort flag for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, int index)
+pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, bool locked, int index)
{
int rc = -1;
if (!op_is_pagedresults(op)) {
@@ -620,14 +719,18 @@ pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, int index
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_set_with_sort", "=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
if (flags & OP_FLAG_SERVER_SIDE_SORTING) {
conn->c_pagedresults.prl_list[index].pr_flags |=
CONN_FLAG_PAGEDRESULTS_WITH_SORT;
}
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
rc = 0;
}
slapi_log_err(SLAPI_LOG_TRACE, "pagedresults_set_with_sort", "<= %d\n", rc);
@@ -802,10 +905,6 @@ pagedresults_cleanup(Connection *conn, int needlock)
rc = 1;
}
prp->pr_current_be = NULL;
- if (prp->pr_mutex) {
- PR_DestroyLock(prp->pr_mutex);
- prp->pr_mutex = NULL;
- }
memset(prp, '\0', sizeof(PagedResults));
}
conn->c_pagedresults.prl_count = 0;
@@ -840,10 +939,6 @@ pagedresults_cleanup_all(Connection *conn, int needlock)
i < conn->c_pagedresults.prl_maxlen;
i++) {
prp = conn->c_pagedresults.prl_list + i;
- if (prp->pr_mutex) {
- PR_DestroyLock(prp->pr_mutex);
- prp->pr_mutex = NULL;
- }
if (prp->pr_current_be && prp->pr_search_result_set &&
prp->pr_current_be->be_search_results_release) {
prp->pr_current_be->be_search_results_release(&(prp->pr_search_result_set));
@@ -1010,43 +1105,8 @@ op_set_pagedresults(Operation *op)
op->o_flags |= OP_FLAG_PAGED_RESULTS;
}
-/*
- * pagedresults_lock/unlock -- introduced to protect search results for the
- * asynchronous searches. Do not call these functions while the PR conn lock
- * is held (e.g. pageresult_lock_get_addr(conn))
- */
-void
-pagedresults_lock(Connection *conn, int index)
-{
- PagedResults *prp;
- if (!conn || (index < 0) || (index >= conn->c_pagedresults.prl_maxlen)) {
- return;
- }
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
- prp = conn->c_pagedresults.prl_list + index;
- if (prp->pr_mutex) {
- PR_Lock(prp->pr_mutex);
- }
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
-}
-
-void
-pagedresults_unlock(Connection *conn, int index)
-{
- PagedResults *prp;
- if (!conn || (index < 0) || (index >= conn->c_pagedresults.prl_maxlen)) {
- return;
- }
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
- prp = conn->c_pagedresults.prl_list + index;
- if (prp->pr_mutex) {
- PR_Unlock(prp->pr_mutex);
- }
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
-}
-
int
-pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int index)
+pagedresults_is_abandoned_or_notavailable(Connection *conn, bool locked, int index)
{
PagedResults *prp;
int32_t result;
@@ -1066,7 +1126,7 @@ pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int inde
}
int
-pagedresults_set_search_result_pb(Slapi_PBlock *pb, void *sr, int locked)
+pagedresults_set_search_result_pb(Slapi_PBlock *pb, void *sr, bool locked)
{
int rc = -1;
Connection *conn = NULL;
diff --git a/ldap/servers/slapd/proto-slap.h b/ldap/servers/slapd/proto-slap.h
index 765c12bf5..455d6d718 100644
--- a/ldap/servers/slapd/proto-slap.h
+++ b/ldap/servers/slapd/proto-slap.h
@@ -1614,20 +1614,22 @@ pthread_mutex_t *pageresult_lock_get_addr(Connection *conn);
int pagedresults_parse_control_value(Slapi_PBlock *pb, struct berval *psbvp, ber_int_t *pagesize, int *index, Slapi_Backend *be);
void pagedresults_set_response_control(Slapi_PBlock *pb, int iscritical, ber_int_t estimate, int curr_search_count, int index);
Slapi_Backend *pagedresults_get_current_be(Connection *conn, int index);
-int pagedresults_set_current_be(Connection *conn, Slapi_Backend *be, int index, int nolock);
-void *pagedresults_get_search_result(Connection *conn, Operation *op, int locked, int index);
-int pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, int locked, int index);
-int pagedresults_get_search_result_count(Connection *conn, Operation *op, int index);
-int pagedresults_set_search_result_count(Connection *conn, Operation *op, int cnt, int index);
+int pagedresults_set_current_be(Connection *conn, Slapi_Backend *be, int index, bool locked);
+void *pagedresults_get_search_result(Connection *conn, Operation *op, bool locked, int index);
+int pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, bool locked, int index);
+int pagedresults_get_search_result_count(Connection *conn, Operation *op, bool locked, int index);
+int pagedresults_set_search_result_count(Connection *conn, Operation *op, int cnt, bool locked, int index);
int pagedresults_get_search_result_set_size_estimate(Connection *conn,
Operation *op,
+ bool locked,
int index);
int pagedresults_set_search_result_set_size_estimate(Connection *conn,
Operation *op,
int cnt,
+ bool locked,
int index);
-int pagedresults_get_with_sort(Connection *conn, Operation *op, int index);
-int pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, int index);
+int pagedresults_get_with_sort(Connection *conn, Operation *op, bool locked, int index);
+int pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, bool locked, int index);
int pagedresults_get_unindexed(Connection *conn, Operation *op, int index);
int pagedresults_set_unindexed(Connection *conn, Operation *op, int index);
int pagedresults_get_sort_result_code(Connection *conn, Operation *op, int index);
@@ -1639,15 +1641,13 @@ int pagedresults_cleanup(Connection *conn, int needlock);
int pagedresults_is_timedout_nolock(Connection *conn);
int pagedresults_reset_timedout_nolock(Connection *conn);
int pagedresults_in_use_nolock(Connection *conn);
-int pagedresults_free_one(Connection *conn, Operation *op, int index);
-int pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, pthread_mutex_t *mutex);
+int pagedresults_free_one(Connection *conn, Operation *op, bool locked, int index);
+int pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, bool locked);
int op_is_pagedresults(Operation *op);
int pagedresults_cleanup_all(Connection *conn, int needlock);
void op_set_pagedresults(Operation *op);
-void pagedresults_lock(Connection *conn, int index);
-void pagedresults_unlock(Connection *conn, int index);
-int pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int index);
-int pagedresults_set_search_result_pb(Slapi_PBlock *pb, void *sr, int locked);
+int pagedresults_is_abandoned_or_notavailable(Connection *conn, bool locked, int index);
+int pagedresults_set_search_result_pb(Slapi_PBlock *pb, void *sr, bool locked);
/*
* sort.c
diff --git a/ldap/servers/slapd/slap.h b/ldap/servers/slapd/slap.h
index 11c5602e3..d494931c2 100644
--- a/ldap/servers/slapd/slap.h
+++ b/ldap/servers/slapd/slap.h
@@ -89,6 +89,10 @@ static char ptokPBE[34] = "Internal (Software) Token ";
#include <stdbool.h>
#include <time.h> /* For timespec definitions */
+/* Macros for paged results lock parameter */
+#define PR_LOCKED true
+#define PR_NOT_LOCKED false
+
/* Provides our int types and platform specific requirements. */
#include <slapi_pal.h>
@@ -1669,7 +1673,6 @@ typedef struct _paged_results
struct timespec pr_timelimit_hr; /* expiry time of this request rel to clock monotonic */
int pr_flags;
ber_int_t pr_msgid; /* msgid of the request; to abandon */
- PRLock *pr_mutex; /* protect each conn structure */
} PagedResults;
/* array of simple paged structure stashed in connection */
--
2.52.0


@@ -1,72 +0,0 @@
From 6b80ba631161219093267e8e4c885bfc392d3d61 Mon Sep 17 00:00:00 2001
From: progier389 <progier@redhat.com>
Date: Fri, 6 Sep 2024 14:45:06 +0200
Subject: [PATCH] Issue 6090 - Fix dbscan options and man pages (#6315)
* Issue 6090 - Fix dbscan options and man pages
dbscan's -d option is dangerously confusing: it removes a database instance, while in db_stat it identifies the database
(cf. issue #5609).
This fix implements long options in dbscan, renames -d to --remove, and requires a new --do-it option for actions that change the database content.
The fix also aligns both the usage text and the dbscan man page with the new set of options.
Issue: #6090
Reviewed by: @tbordaz, @droideck (Thanks!)
(cherry picked from commit 25e1d16887ebd299dfe0088080b9ee0deec1e41f)
---
ldap/servers/slapd/back-ldbm/dbimpl.c | 5 ++++-
src/lib389/lib389/cli_ctl/dblib.py | 13 ++++++++++++-
2 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/ldap/servers/slapd/back-ldbm/dbimpl.c b/ldap/servers/slapd/back-ldbm/dbimpl.c
index f3bf68a9f..83662df8c 100644
--- a/ldap/servers/slapd/back-ldbm/dbimpl.c
+++ b/ldap/servers/slapd/back-ldbm/dbimpl.c
@@ -481,7 +481,10 @@ int dblayer_private_close(Slapi_Backend **be, dbi_env_t **env, dbi_db_t **db)
slapi_ch_free_string(&li->li_directory);
slapi_ch_free((void**)&li->li_dblayer_private);
slapi_ch_free((void**)&li->li_dblayer_config);
- ldbm_config_destroy(li);
+ if (dblayer_is_lmdb(*be)) {
+ /* Generate use after free and double free in bdb case */
+ ldbm_config_destroy(li);
+ }
slapi_ch_free((void**)&(*be)->be_database);
slapi_ch_free((void**)&(*be)->be_instance_info);
slapi_ch_free((void**)be);
diff --git a/src/lib389/lib389/cli_ctl/dblib.py b/src/lib389/lib389/cli_ctl/dblib.py
index 053a72d61..318ae5ae9 100644
--- a/src/lib389/lib389/cli_ctl/dblib.py
+++ b/src/lib389/lib389/cli_ctl/dblib.py
@@ -199,6 +199,14 @@ def run_dbscan(args):
return output
+def does_dbscan_need_do_it():
+ prefix = os.environ.get('PREFIX', "")
+ prog = f'{prefix}/bin/dbscan'
+ args = [ prog, '-h' ]
+ output = subprocess.run(args, encoding='utf-8', stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ return '--do-it' in output.stdout
+
+
def export_changelog(be, dblib):
# Export backend changelog
if not be['has_changelog']:
@@ -217,7 +225,10 @@ def import_changelog(be, dblib):
try:
cl5dbname = be['eccl5dbname'] if dblib == "bdb" else be['cl5dbname']
_log.info(f"Importing changelog {cl5dbname} from {be['cl5name']}")
- run_dbscan(['-D', dblib, '-f', cl5dbname, '--import', be['cl5name'], '--do-it'])
+ if does_dbscan_need_do_it():
+ run_dbscan(['-D', dblib, '-f', cl5dbname, '-I', be['cl5name'], '--do-it'])
+ else:
+ run_dbscan(['-D', dblib, '-f', cl5dbname, '-I', be['cl5name']])
return True
except subprocess.CalledProcessError as e:
return False
--
2.48.0
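The `does_dbscan_need_do_it()` helper above probes dbscan's help output so lib389 works with both old and new binaries. The same feature-detection pattern, generalized (function and parameter names here are stand-ins, not the lib389 API):

```python
import subprocess

def cli_supports_flag(prog, flag):
    """Probe a command's help output for a flag before using it, the
    same pattern as lib389's does_dbscan_need_do_it()."""
    result = subprocess.run([prog, '-h'], encoding='utf-8',
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    return flag in result.stdout
```

Probing `-h` output is cruder than parsing a version string but survives rebuilds and backports where the version number alone would lie about available options.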


@@ -0,0 +1,183 @@
From 4936f953fa3b0726c2b178f135cd78dcac7463ba Mon Sep 17 00:00:00 2001
From: Simon Pichugin <spichugi@redhat.com>
Date: Thu, 8 Jan 2026 10:02:39 -0800
Subject: [PATCH] Issue 7108 - Fix shutdown crash in entry cache destruction
(#7163)
Description: The entry cache could experience LRU list corruption when
using pinned entries, leading to crashes during cache flush operations.
In entrycache_add_int(), when returning an existing cached entry, the
code checked the wrong entry's state before calling lru_delete(). It
checked the new entry 'e' but operated on the existing entry 'my_alt',
causing lru_delete() to be called on entries not in the LRU list. This
is fixed by checking my_alt's refcnt and pinned state instead.
In flush_hash(), pinned_remove() and lru_delete() were both called on
pinned entries. Since pinned entries are in the pinned list, calling
lru_delete() afterwards corrupted the list. This is fixed by calling
either pinned_remove() or lru_delete() based on the entry's state.
A NULL check is added in entrycache_flush() and dncache_flush() to
gracefully handle corrupted LRU lists and prevent crashes when
traversing backwards through the list encounters an unexpected NULL.
Entry pointers are now always cleared after lru_delete() removal to
prevent stale pointer issues in non-debug builds.
Fixes: https://github.com/389ds/389-ds-base/issues/7108
Reviewed by: @progier389, @vashirov (Thanks!!)
---
ldap/servers/slapd/back-ldbm/cache.c | 48 +++++++++++++++++++++++++---
1 file changed, 43 insertions(+), 5 deletions(-)
diff --git a/ldap/servers/slapd/back-ldbm/cache.c b/ldap/servers/slapd/back-ldbm/cache.c
index 2e4126134..a87f30687 100644
--- a/ldap/servers/slapd/back-ldbm/cache.c
+++ b/ldap/servers/slapd/back-ldbm/cache.c
@@ -458,11 +458,13 @@ static void
lru_delete(struct cache *cache, void *ptr)
{
struct backcommon *e;
+
if (NULL == ptr) {
LOG("=> lru_delete\n<= lru_delete (null entry)\n");
return;
}
e = (struct backcommon *)ptr;
+
#ifdef LDAP_CACHE_DEBUG_LRU
pinned_verify(cache, __LINE__);
lru_verify(cache, e, 1);
@@ -475,8 +477,9 @@ lru_delete(struct cache *cache, void *ptr)
e->ep_lrunext->ep_lruprev = e->ep_lruprev;
else
cache->c_lrutail = e->ep_lruprev;
-#ifdef LDAP_CACHE_DEBUG_LRU
+ /* Always clear pointers after removal to prevent stale pointer issues */
e->ep_lrunext = e->ep_lruprev = NULL;
+#ifdef LDAP_CACHE_DEBUG_LRU
lru_verify(cache, e, 0);
#endif
}
@@ -633,9 +636,14 @@ flush_hash(struct cache *cache, struct timespec *start_time, int32_t type)
if (entry->ep_refcnt == 0) {
entry->ep_refcnt++;
if (entry->ep_state & ENTRY_STATE_PINNED) {
+ /* Entry is in pinned list, not LRU - remove from pinned only.
+ * pinned_remove clears lru pointers and won't add to LRU since refcnt > 0.
+ */
pinned_remove(cache, laste);
+ } else {
+ /* Entry is in LRU list - remove from LRU */
+ lru_delete(cache, laste);
}
- lru_delete(cache, laste);
if (type == ENTRY_CACHE) {
entrycache_remove_int(cache, laste);
entrycache_return(cache, (struct backentry **)&laste, PR_TRUE);
@@ -679,9 +687,14 @@ flush_hash(struct cache *cache, struct timespec *start_time, int32_t type)
if (entry->ep_refcnt == 0) {
entry->ep_refcnt++;
if (entry->ep_state & ENTRY_STATE_PINNED) {
+ /* Entry is in pinned list, not LRU - remove from pinned only.
+ * pinned_remove clears lru pointers and won't add to LRU since refcnt > 0.
+ */
pinned_remove(cache, laste);
+ } else {
+ /* Entry is in LRU list - remove from LRU */
+ lru_delete(cache, laste);
}
- lru_delete(cache, laste);
entrycache_remove_int(cache, laste);
entrycache_return(cache, (struct backentry **)&laste, PR_TRUE);
} else {
@@ -772,6 +785,11 @@ entrycache_flush(struct cache *cache)
} else {
e = BACK_LRU_PREV(e, struct backentry *);
}
+ if (e == NULL) {
+ slapi_log_err(SLAPI_LOG_WARNING, "entrycache_flush",
+ "Unexpected NULL entry while flushing cache - LRU list may be corrupted\n");
+ break;
+ }
ASSERT(e->ep_refcnt == 0);
e->ep_refcnt++;
if (entrycache_remove_int(cache, e) < 0) {
@@ -1160,6 +1178,7 @@ pinned_remove(struct cache *cache, void *ptr)
{
struct backentry *e = (struct backentry *)ptr;
ASSERT(e->ep_state & ENTRY_STATE_PINNED);
+
cache->c_pinned_ctx->npinned--;
cache->c_pinned_ctx->size -= e->ep_size;
e->ep_state &= ~ENTRY_STATE_PINNED;
@@ -1172,13 +1191,23 @@ pinned_remove(struct cache *cache, void *ptr)
cache->c_pinned_ctx->head = cache->c_pinned_ctx->tail = NULL;
} else {
cache->c_pinned_ctx->head = BACK_LRU_NEXT(e, struct backentry *);
+ /* Update new head's prev pointer to NULL */
+ if (cache->c_pinned_ctx->head) {
+ cache->c_pinned_ctx->head->ep_lruprev = NULL;
+ }
}
} else if (cache->c_pinned_ctx->tail == e) {
cache->c_pinned_ctx->tail = BACK_LRU_PREV(e, struct backentry *);
+ /* Update new tail's next pointer to NULL */
+ if (cache->c_pinned_ctx->tail) {
+ cache->c_pinned_ctx->tail->ep_lrunext = NULL;
+ }
} else {
+ /* Middle of list: update both neighbors to point to each other */
BACK_LRU_PREV(e, struct backentry *)->ep_lrunext = BACK_LRU_NEXT(e, struct backcommon *);
BACK_LRU_NEXT(e, struct backentry *)->ep_lruprev = BACK_LRU_PREV(e, struct backcommon *);
}
+ /* Clear the removed entry's pointers */
e->ep_lrunext = e->ep_lruprev = NULL;
if (e->ep_refcnt == 0) {
lru_add(cache, ptr);
@@ -1245,6 +1274,7 @@ pinned_add(struct cache *cache, void *ptr)
return false;
}
/* Now it is time to insert the entry in the pinned list */
+
cache->c_pinned_ctx->npinned++;
cache->c_pinned_ctx->size += e->ep_size;
e->ep_state |= ENTRY_STATE_PINNED;
@@ -1754,7 +1784,7 @@ entrycache_add_int(struct cache *cache, struct backentry *e, int state, struct b
* 3) ep_state: 0 && state: 0
* ==> increase the refcnt
*/
- if (e->ep_refcnt == 0)
+ if (e->ep_refcnt == 0 && (e->ep_state & ENTRY_STATE_PINNED) == 0)
lru_delete(cache, (void *)e);
e->ep_refcnt++;
e->ep_state &= ~ENTRY_STATE_UNAVAILABLE;
@@ -1781,7 +1811,7 @@ entrycache_add_int(struct cache *cache, struct backentry *e, int state, struct b
} else {
if (alt) {
*alt = my_alt;
- if (e->ep_refcnt == 0 && (e->ep_state & ENTRY_STATE_PINNED) == 0)
+ if (my_alt->ep_refcnt == 0 && (my_alt->ep_state & ENTRY_STATE_PINNED) == 0)
lru_delete(cache, (void *)*alt);
(*alt)->ep_refcnt++;
LOG("the entry %s already exists. returning existing entry %s (state: 0x%x)\n",
@@ -2379,6 +2409,14 @@ dncache_flush(struct cache *cache)
} else {
dn = BACK_LRU_PREV(dn, struct backdn *);
}
+ if (dn == NULL) {
+ /* Safety check: we should normally exit via the CACHE_LRU_HEAD check.
+ * If we get here, c_lruhead may be NULL or the LRU list is corrupted.
+ */
+ slapi_log_err(SLAPI_LOG_WARNING, "dncache_flush",
+ "Unexpected NULL entry while flushing cache - LRU list may be corrupted\n");
+ break;
+ }
ASSERT(dn->ep_refcnt == 0);
dn->ep_refcnt++;
if (dncache_remove_int(cache, dn) < 0) {
--
2.52.0
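The heart of the `pinned_remove()` fix above is unlinking a node from a head/tail doubly linked list while also repairing the neighbors' back-pointers, which the old code skipped. A self-contained sketch of exactly that step (`ctx` is a plain dict standing in for the cache's pinned-list context):

```python
class Node:
    def __init__(self):
        self.prev = None
        self.next = None

def pinned_unlink(ctx, node):
    """Unlink node from a head/tail doubly linked list, including the
    neighbor-pointer updates the pinned_remove() fix adds."""
    if ctx["head"] is node:
        if ctx["tail"] is node:
            ctx["head"] = ctx["tail"] = None   # only element: empty the list
        else:
            ctx["head"] = node.next
            ctx["head"].prev = None            # new head's prev must be cleared
    elif ctx["tail"] is node:
        ctx["tail"] = node.prev
        ctx["tail"].next = None                # new tail's next must be cleared
    else:
        node.prev.next = node.next             # middle: relink both neighbors
        node.next.prev = node.prev
    node.prev = node.next = None               # always clear the removed node
```

Leaving either a neighbor pointer or the removed node's own pointers stale is what produced the traversal crashes the NULL checks in `entrycache_flush()`/`dncache_flush()` now guard against.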


@@ -1,146 +0,0 @@
From dc8032856d51c382e266eea72f66284e70a0e40c Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Fri, 31 Jan 2025 08:54:27 -0500
Subject: [PATCH] Issue 6489 - After log rotation refresh the FD pointer
Description:
When flushing a log buffer we get an FD for the log prior to checking whether the
log should be rotated. If the log is rotated, that FD reference is now
invalid and needs to be refreshed before proceeding.
Relates: https://github.com/389ds/389-ds-base/issues/6489
Reviewed by: tbordaz(Thanks!)
---
.../suites/logging/log_flush_rotation_test.py | 81 +++++++++++++++++++
ldap/servers/slapd/log.c | 18 +++++
2 files changed, 99 insertions(+)
create mode 100644 dirsrvtests/tests/suites/logging/log_flush_rotation_test.py
diff --git a/dirsrvtests/tests/suites/logging/log_flush_rotation_test.py b/dirsrvtests/tests/suites/logging/log_flush_rotation_test.py
new file mode 100644
index 000000000..b33a622e1
--- /dev/null
+++ b/dirsrvtests/tests/suites/logging/log_flush_rotation_test.py
@@ -0,0 +1,81 @@
+# --- BEGIN COPYRIGHT BLOCK ---
+# Copyright (C) 2025 Red Hat, Inc.
+# All rights reserved.
+#
+# License: GPL (version 3 or any later version).
+# See LICENSE for details.
+# --- END COPYRIGHT BLOCK ---
+#
+import os
+import logging
+import time
+import pytest
+from lib389._constants import DEFAULT_SUFFIX, PW_DM
+from lib389.tasks import ImportTask
+from lib389.idm.user import UserAccounts
+from lib389.topologies import topology_st as topo
+
+
+log = logging.getLogger(__name__)
+
+
+def test_log_flush_and_rotation_crash(topo):
+ """Make sure server does not crash when flushing a buffer and rotating
+ the log at the same time
+
+ :id: d4b0af2f-48b2-45f5-ae8b-f06f692c3133
+ :setup: Standalone Instance
+ :steps:
+ 1. Enable all logs
+ 2. Enable log buffering for all logs
+ 3. Set rotation time unit to 1 minute
+ 4. Make sure server is still running after 1 minute
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. Success
+ """
+
+ inst = topo.standalone
+
+ # Enable logging and buffering
+ inst.config.set("nsslapd-auditlog-logging-enabled", "on")
+ inst.config.set("nsslapd-accesslog-logbuffering", "on")
+ inst.config.set("nsslapd-auditlog-logbuffering", "on")
+ inst.config.set("nsslapd-errorlog-logbuffering", "on")
+ inst.config.set("nsslapd-securitylog-logbuffering", "on")
+
+ # Set rotation policy to trigger rotation asap
+ inst.config.set("nsslapd-accesslog-logrotationtimeunit", "minute")
+ inst.config.set("nsslapd-auditlog-logrotationtimeunit", "minute")
+ inst.config.set("nsslapd-errorlog-logrotationtimeunit", "minute")
+ inst.config.set("nsslapd-securitylog-logrotationtimeunit", "minute")
+
+ #
+ # Performs ops to populate all the logs
+ #
+ # Access & audit log
+ users = UserAccounts(topo.standalone, DEFAULT_SUFFIX)
+ user = users.create_test_user()
+ user.set("userPassword", PW_DM)
+ # Security log
+ user.bind(PW_DM)
+ # Error log
+ import_task = ImportTask(inst)
+ import_task.import_suffix_from_ldif(ldiffile="/not/here",
+ suffix=DEFAULT_SUFFIX)
+
+ # Wait a minute and make sure the server did not crash
+ log.info("Sleep until logs are flushed and rotated")
+ time.sleep(61)
+
+ assert inst.status()
+
+
+if __name__ == '__main__':
+ # Run isolated
+ # -s for DEBUG mode
+ CURRENT_FILE = os.path.realpath(__file__)
+ pytest.main(["-s", CURRENT_FILE])
+
diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c
index 76f2b6768..7e2c980a4 100644
--- a/ldap/servers/slapd/log.c
+++ b/ldap/servers/slapd/log.c
@@ -6746,6 +6746,23 @@ log_refresh_state(int32_t log_type)
return 0;
}
}
+static LOGFD
+log_refresh_fd(int32_t log_type)
+{
+ switch (log_type) {
+ case SLAPD_ACCESS_LOG:
+ return loginfo.log_access_fdes;
+ case SLAPD_SECURITY_LOG:
+ return loginfo.log_security_fdes;
+ case SLAPD_AUDIT_LOG:
+ return loginfo.log_audit_fdes;
+ case SLAPD_AUDITFAIL_LOG:
+ return loginfo.log_auditfail_fdes;
+ case SLAPD_ERROR_LOG:
+ return loginfo.log_error_fdes;
+ }
+ return NULL;
+}
/* this function assumes the lock is already acquired */
/* if sync_now is non-zero, data is flushed to physical storage */
@@ -6857,6 +6874,7 @@ log_flush_buffer(LogBufferInfo *lbi, int log_type, int sync_now, int locked)
rotationtime_secs);
}
log_state = log_refresh_state(log_type);
+ fd = log_refresh_fd(log_type);
}
if (log_state & LOGGING_NEED_TITLE) {
--
2.48.0
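The shape of the fix above is: a handle fetched early in the flush path goes stale if rotation replaces the underlying file, so it must be re-fetched after rotating. A toy model with a generation counter standing in for the `LOGFD` (all names here are illustrative):

```python
class LogSketch:
    """Toy model of the log.c fix: 'generation' stands in for the LOGFD;
    rotation bumps it, invalidating any handle fetched earlier."""

    def __init__(self):
        self.generation = 0
        self.last_write = None

    def rotate(self):
        self.generation += 1          # old handle is now invalid

    def flush(self, needs_rotation):
        fd = self.generation          # handle fetched before rotation check
        if needs_rotation:
            self.rotate()
            fd = self.generation      # the fix: re-fetch, as log_refresh_fd() does
        self.last_write = fd          # write only through a valid handle
```

Without the re-fetch, `last_write` would record the pre-rotation handle, which in the real server means writing through a closed descriptor and the crash the test reproduces.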


@@ -0,0 +1,215 @@
From 742c12e0247ab64e87da000a4de2f3e5c99044ab Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Fri, 9 Jan 2026 11:39:50 +0100
Subject: [PATCH] Issue 7172 - Index ordering mismatch after upgrade (#7173)
Bug Description:
Commit daf731f55071d45eaf403a52b63d35f4e699ff28 introduced a regression.
After upgrading to a version that adds `integerOrderingMatch` matching
rule to `parentid` and `ancestorid` indexes, searches may return empty
or incorrect results.
This happens because the existing index data was created with
lexicographic ordering, but the new compare function expects integer
ordering. Index lookups fail because the compare function doesn't match
the data ordering.
The root cause is that `ldbm_instance_create_default_indexes()` calls
`attr_index_config()` unconditionally for `parentid` and `ancestorid`
indexes, which triggers `ainfo_dup()` to overwrite `ai_key_cmp_fn` on
existing indexes. This breaks indexes that were created without the
`integerOrderingMatch` matching rule.
Fix Description:
* Call `attr_index_config()` for `parentid` and `ancestorid` indexes
only if index config doesn't exist.
* Add `upgrade_check_id_index_matching_rule()` that logs an error on
server startup if `parentid` or `ancestorid` indexes are missing the
integerOrderingMatch matching rule, advising administrators to reindex.
Fixes: https://github.com/389ds/389-ds-base/issues/7172
Reviewed by: @tbordaz, @progier389, @droideck (Thanks!)
---
ldap/servers/slapd/back-ldbm/instance.c | 25 ++++--
ldap/servers/slapd/upgrade.c | 107 +++++++++++++++++++++++-
2 files changed, 123 insertions(+), 9 deletions(-)
diff --git a/ldap/servers/slapd/back-ldbm/instance.c b/ldap/servers/slapd/back-ldbm/instance.c
index cb002c379..71bf0f6fa 100644
--- a/ldap/servers/slapd/back-ldbm/instance.c
+++ b/ldap/servers/slapd/back-ldbm/instance.c
@@ -190,6 +190,7 @@ ldbm_instance_create_default_indexes(backend *be)
char *ancestorid_indexes_limit = NULL;
char *parentid_indexes_limit = NULL;
struct attrinfo *ai = NULL;
+ struct attrinfo *index_already_configured = NULL;
struct index_idlistsizeinfo *iter;
int cookie;
int limit;
@@ -248,10 +249,14 @@ ldbm_instance_create_default_indexes(backend *be)
ldbm_instance_config_add_index_entry(inst, e, flags);
slapi_entry_free(e);
- e = ldbm_instance_init_config_entry(LDBM_PARENTID_STR, "eq", 0, 0, 0, "integerOrderingMatch", parentid_indexes_limit);
- ldbm_instance_config_add_index_entry(inst, e, flags);
- attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
- slapi_entry_free(e);
+ ainfo_get(be, (char *)LDBM_PARENTID_STR, &ai);
+ index_already_configured = ai;
+ if (!index_already_configured) {
+ e = ldbm_instance_init_config_entry(LDBM_PARENTID_STR, "eq", 0, 0, 0, "integerOrderingMatch", parentid_indexes_limit);
+ ldbm_instance_config_add_index_entry(inst, e, flags);
+ attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
+ slapi_entry_free(e);
+ }
e = ldbm_instance_init_config_entry("objectclass", "eq", 0, 0, 0, 0, 0);
ldbm_instance_config_add_index_entry(inst, e, flags);
@@ -288,10 +293,14 @@ ldbm_instance_create_default_indexes(backend *be)
* ancestorid is special, there is actually no such attr type
* but we still want to use the attr index file APIs.
*/
- e = ldbm_instance_init_config_entry(LDBM_ANCESTORID_STR, "eq", 0, 0, 0, "integerOrderingMatch", ancestorid_indexes_limit);
- ldbm_instance_config_add_index_entry(inst, e, flags);
- attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
- slapi_entry_free(e);
+ ainfo_get(be, (char *)LDBM_ANCESTORID_STR, &ai);
+ index_already_configured = ai;
+ if (!index_already_configured) {
+ e = ldbm_instance_init_config_entry(LDBM_ANCESTORID_STR, "eq", 0, 0, 0, "integerOrderingMatch", ancestorid_indexes_limit);
+ ldbm_instance_config_add_index_entry(inst, e, flags);
+ attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
+ slapi_entry_free(e);
+ }
slapi_ch_free_string(&ancestorid_indexes_limit);
slapi_ch_free_string(&parentid_indexes_limit);
diff --git a/ldap/servers/slapd/upgrade.c b/ldap/servers/slapd/upgrade.c
index 858392564..b02e37ed6 100644
--- a/ldap/servers/slapd/upgrade.c
+++ b/ldap/servers/slapd/upgrade.c
@@ -330,6 +330,107 @@ upgrade_remove_subtree_rename(void)
return UPGRADE_SUCCESS;
}
+/*
+ * Check if parentid/ancestorid indexes are missing the integerOrderingMatch
+ * matching rule.
+ *
+ * This function logs a warning if we detect this condition, advising
+ * the administrator to reindex the affected attributes.
+ */
+static upgrade_status
+upgrade_check_id_index_matching_rule(void)
+{
+ struct slapi_pblock *pb = slapi_pblock_new();
+ Slapi_Entry **backends = NULL;
+ const char *be_base_dn = "cn=ldbm database,cn=plugins,cn=config";
+ const char *be_filter = "(objectclass=nsBackendInstance)";
+ const char *attrs_to_check[] = {"parentid", "ancestorid", NULL};
+ upgrade_status uresult = UPGRADE_SUCCESS;
+
+ /* Search for all backend instances */
+ slapi_search_internal_set_pb(
+ pb, be_base_dn,
+ LDAP_SCOPE_ONELEVEL,
+ be_filter, NULL, 0, NULL, NULL,
+ plugin_get_default_component_id(), 0);
+ slapi_search_internal_pb(pb);
+ slapi_pblock_get(pb, SLAPI_PLUGIN_INTOP_SEARCH_ENTRIES, &backends);
+
+ if (backends) {
+ for (size_t be_idx = 0; backends[be_idx] != NULL; be_idx++) {
+ const char *be_name = slapi_entry_attr_get_ref(backends[be_idx], "cn");
+ if (!be_name) {
+ continue;
+ }
+
+ /* Check each attribute that should have integerOrderingMatch */
+ for (size_t attr_idx = 0; attrs_to_check[attr_idx] != NULL; attr_idx++) {
+ const char *attr_name = attrs_to_check[attr_idx];
+ struct slapi_pblock *idx_pb = slapi_pblock_new();
+ Slapi_Entry **idx_entries = NULL;
+ char *idx_dn = slapi_create_dn_string("cn=%s,cn=index,cn=%s,%s",
+ attr_name, be_name, be_base_dn);
+ char *idx_filter = "(objectclass=nsIndex)";
+ PRBool has_matching_rule = PR_FALSE;
+
+ if (!idx_dn) {
+ slapi_pblock_destroy(idx_pb);
+ continue;
+ }
+
+ slapi_search_internal_set_pb(
+ idx_pb, idx_dn,
+ LDAP_SCOPE_BASE,
+ idx_filter, NULL, 0, NULL, NULL,
+ plugin_get_default_component_id(), 0);
+ slapi_search_internal_pb(idx_pb);
+ slapi_pblock_get(idx_pb, SLAPI_PLUGIN_INTOP_SEARCH_ENTRIES, &idx_entries);
+
+ if (idx_entries && idx_entries[0]) {
+ /* Index exists, check if it has integerOrderingMatch */
+ Slapi_Attr *mr_attr = NULL;
+ if (slapi_entry_attr_find(idx_entries[0], "nsMatchingRule", &mr_attr) == 0) {
+ Slapi_Value *sval = NULL;
+ int idx;
+ for (idx = slapi_attr_first_value(mr_attr, &sval);
+ idx != -1;
+ idx = slapi_attr_next_value(mr_attr, idx, &sval)) {
+ const struct berval *bval = slapi_value_get_berval(sval);
+ if (bval && bval->bv_val &&
+ strcasecmp(bval->bv_val, "integerOrderingMatch") == 0) {
+ has_matching_rule = PR_TRUE;
+ break;
+ }
+ }
+ }
+
+ if (!has_matching_rule) {
+ /* Index exists but doesn't have integerOrderingMatch, log a warning */
+ slapi_log_err(SLAPI_LOG_ERR, "upgrade_check_id_index_matching_rule",
+ "Index '%s' in backend '%s' is missing 'nsMatchingRule: integerOrderingMatch'. "
+ "Incorrectly configured system indexes can lead to poor search performance, replication issues, and other operational problems. "
+ "To fix this, add the matching rule and reindex: "
+ "dsconf <instance> backend index set --add-mr integerOrderingMatch --attr %s %s && "
+ "dsconf <instance> backend index reindex --attr %s %s. "
+ "WARNING: Reindexing can be resource-intensive and may impact server performance on a live system. "
+ "Consider scheduling reindexing during maintenance windows or periods of low activity.\n",
+ attr_name, be_name, attr_name, be_name, attr_name, be_name);
+ }
+ }
+
+ slapi_ch_free_string(&idx_dn);
+ slapi_free_search_results_internal(idx_pb);
+ slapi_pblock_destroy(idx_pb);
+ }
+ }
+ }
+
+ slapi_free_search_results_internal(pb);
+ slapi_pblock_destroy(pb);
+
+ return uresult;
+}
+
/*
* Upgrade the base config of the PAM PTA plugin.
*
@@ -547,7 +648,11 @@ upgrade_server(void)
if (upgrade_pam_pta_default_config() != UPGRADE_SUCCESS) {
return UPGRADE_FAILURE;
}
-
+
+ if (upgrade_check_id_index_matching_rule() != UPGRADE_SUCCESS) {
+ return UPGRADE_FAILURE;
+ }
+
return UPGRADE_SUCCESS;
}
--
2.52.0
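The core check `upgrade_check_id_index_matching_rule()` performs on each index entry is a case-insensitive membership test over the `nsMatchingRule` values. Reduced to its essence (`entry` is a plain dict standing in for a `Slapi_Entry`):

```python
def index_has_integer_ordering(entry):
    """Case-insensitive test the upgrade check applies to an index
    entry: do its nsMatchingRule values include integerOrderingMatch?"""
    return any(v.lower() == "integerorderingmatch"
               for v in entry.get("nsMatchingRule", []))
```

The C code uses `strcasecmp` for the same reason: LDAP attribute values for matching-rule names are compared case-insensitively, so `integerorderingmatch` in the config must still count as configured.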


@@ -1,236 +0,0 @@
From 90460bfa66fb77118967927963572f69e097c4eb Mon Sep 17 00:00:00 2001
From: James Chapman <jachapma@redhat.com>
Date: Wed, 29 Jan 2025 17:41:55 +0000
Subject: [PATCH] Issue 6436 - MOD on a large group slow if substring index is
present (#6437)
Bug Description: If the substring index is configured for the group
membership attribute ( member or uniqueMember ), the removal of a
member from a large static group is pretty slow.
Fix Description: A proper solution to this issue would be to introduce
a new index to track the membership attribute. In the interim,
we add a check to healthcheck to inform the user of the implications
of this configuration.
Fixes: https://github.com/389ds/389-ds-base/issues/6436
Reviewed by: @Firstyear, @tbordaz, @droideck (Thanks)
---
.../suites/healthcheck/health_config_test.py | 89 ++++++++++++++++++-
src/lib389/lib389/lint.py | 15 ++++
src/lib389/lib389/plugins.py | 37 +++++++-
3 files changed, 137 insertions(+), 4 deletions(-)
diff --git a/dirsrvtests/tests/suites/healthcheck/health_config_test.py b/dirsrvtests/tests/suites/healthcheck/health_config_test.py
index e1e5398ab..f09bc8bb8 100644
--- a/dirsrvtests/tests/suites/healthcheck/health_config_test.py
+++ b/dirsrvtests/tests/suites/healthcheck/health_config_test.py
@@ -167,6 +167,7 @@ def test_healthcheck_RI_plugin_missing_indexes(topology_st):
MEMBER_DN = 'cn=member,cn=index,cn=userroot,cn=ldbm database,cn=plugins,cn=config'
standalone = topology_st.standalone
+ standalone.config.set("nsslapd-accesslog-logbuffering", "on")
log.info('Enable RI plugin')
plugin = ReferentialIntegrityPlugin(standalone)
@@ -188,7 +189,7 @@ def test_healthcheck_RI_plugin_missing_indexes(topology_st):
def test_healthcheck_MO_plugin_missing_indexes(topology_st):
- """Check if HealthCheck returns DSMOLE0002 code
+ """Check if HealthCheck returns DSMOLE0001 code
:id: 236b0ec2-13da-48fb-b65a-db7406d56d5d
:setup: Standalone instance
@@ -203,8 +204,8 @@ def test_healthcheck_MO_plugin_missing_indexes(topology_st):
:expectedresults:
1. Success
2. Success
- 3. Healthcheck reports DSMOLE0002 code and related details
- 4. Healthcheck reports DSMOLE0002 code and related details
+ 3. Healthcheck reports DSMOLE0001 code and related details
+ 4. Healthcheck reports DSMOLE0001 code and related details
5. Success
6. Healthcheck reports no issue found
7. Healthcheck reports no issue found
@@ -214,6 +215,7 @@ def test_healthcheck_MO_plugin_missing_indexes(topology_st):
MO_GROUP_ATTR = 'creatorsname'
standalone = topology_st.standalone
+ standalone.config.set("nsslapd-accesslog-logbuffering", "on")
log.info('Enable MO plugin')
plugin = MemberOfPlugin(standalone)
@@ -236,6 +238,87 @@ def test_healthcheck_MO_plugin_missing_indexes(topology_st):
standalone.restart()
+def test_healthcheck_MO_plugin_substring_index(topology_st):
+ """Check if HealthCheck returns DSMOLE0002 code when the
+ member or uniquemember attribute index includes the substring index type
+
+ :id: 10954811-24ac-4886-8183-e30892f8e02d
+ :setup: Standalone instance
+ :steps:
+ 1. Create DS instance
+ 2. Configure the instance with MO Plugin
+ 3. Change index type to substring for member attribute
+ 4. Use HealthCheck without --json option
+ 5. Use HealthCheck with --json option
+ 6. Change index type back to equality for member attribute
+ 7. Use HealthCheck without --json option
+ 8. Use HealthCheck with --json option
+ 9. Change index type to substring for uniquemember attribute
+ 10. Use HealthCheck without --json option
+ 11. Use HealthCheck with --json option
+ 12. Change index type back to equality for uniquemember attribute
+ 13. Use HealthCheck without --json option
+ 14. Use HealthCheck with --json option
+
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. Healthcheck reports DSMOLE0002 code and related details
+ 5. Healthcheck reports DSMOLE0002 code and related details
+ 6. Success
+ 7. Healthcheck reports no issue found
+ 8. Healthcheck reports no issue found
+ 9. Success
+ 10. Healthcheck reports DSMOLE0002 code and related details
+ 11. Healthcheck reports DSMOLE0002 code and related details
+ 12. Success
+ 13. Healthcheck reports no issue found
+ 14. Healthcheck reports no issue found
+ """
+
+ RET_CODE = 'DSMOLE0002'
+ MEMBER_DN = 'cn=member,cn=index,cn=userroot,cn=ldbm database,cn=plugins,cn=config'
+ UNIQUE_MEMBER_DN = 'cn=uniquemember,cn=index,cn=userroot,cn=ldbm database,cn=plugins,cn=config'
+
+ standalone = topology_st.standalone
+ standalone.config.set("nsslapd-accesslog-logbuffering", "on")
+
+ log.info('Enable MO plugin')
+ plugin = MemberOfPlugin(standalone)
+ plugin.disable()
+ plugin.enable()
+
+ log.info('Change the index type of the member attribute index to substring')
+ index = Index(topology_st.standalone, MEMBER_DN)
+ index.replace('nsIndexType', 'sub')
+
+ run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=RET_CODE)
+ run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=RET_CODE)
+
+ log.info('Set the index type of the member attribute index back to eq')
+ index.replace('nsIndexType', 'eq')
+
+ run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=CMD_OUTPUT)
+ run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=JSON_OUTPUT)
+
+ log.info('Change the index type of the uniquemember attribute index to substring')
+ index = Index(topology_st.standalone, UNIQUE_MEMBER_DN)
+ index.replace('nsIndexType', 'sub')
+
+ run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=RET_CODE)
+ run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=RET_CODE)
+
+ log.info('Set the index type of the uniquemember attribute index back to eq')
+ index.replace('nsIndexType', 'eq')
+
+ run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=CMD_OUTPUT)
+ run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=JSON_OUTPUT)
+
+ # Restart the instance after changing the plugin to avoid breaking the other tests
+ standalone.restart()
+
+
@pytest.mark.xfail(ds_is_older("1.4.1"), reason="Not implemented")
def test_healthcheck_virtual_attr_incorrectly_indexed(topology_st):
"""Check if HealthCheck returns DSVIRTLE0001 code
diff --git a/src/lib389/lib389/lint.py b/src/lib389/lib389/lint.py
index d0747f0f4..460bf64fc 100644
--- a/src/lib389/lib389/lint.py
+++ b/src/lib389/lib389/lint.py
@@ -270,6 +270,21 @@ database after adding the missing index type. Here is an example using dsconf:
"""
}
+DSMOLE0002 = {
+ 'dsle': 'DSMOLE0002',
+ 'severity': 'LOW',
+ 'description': 'Removal of a member can be slow',
+ 'items': ['cn=memberof plugin,cn=plugins,cn=config', ],
+ 'detail': """If a substring index is configured for a membership attribute, the removal of a member
+from a large group can be slow.
+
+""",
+ 'fix': """If not required, you can remove the substring index type using dsconf:
+
+ # dsconf slapd-YOUR_INSTANCE backend index set --attr=ATTR BACKEND --del-type=sub
+"""
+}
+
# Disk Space check. Note - PARTITION is replaced by the calling function
DSDSLE0001 = {
'dsle': 'DSDSLE0001',
diff --git a/src/lib389/lib389/plugins.py b/src/lib389/lib389/plugins.py
index 67af93a14..31bbfa502 100644
--- a/src/lib389/lib389/plugins.py
+++ b/src/lib389/lib389/plugins.py
@@ -12,7 +12,7 @@ import copy
import os.path
from lib389 import tasks
from lib389._mapped_object import DSLdapObjects, DSLdapObject
-from lib389.lint import DSRILE0001, DSRILE0002, DSMOLE0001
+from lib389.lint import DSRILE0001, DSRILE0002, DSMOLE0001, DSMOLE0002
from lib389.utils import ensure_str, ensure_list_bytes
from lib389.schema import Schema
from lib389._constants import (
@@ -827,6 +827,41 @@ class MemberOfPlugin(Plugin):
report['check'] = f'memberof:attr_indexes'
yield report
+ def _lint_member_substring_index(self):
+ if self.status():
+ from lib389.backend import Backends
+ backends = Backends(self._instance).list()
+ membership_attrs = ['member', 'uniquemember']
+ container = self.get_attr_val_utf8_l("nsslapd-plugincontainerscope")
+ for backend in backends:
+ suffix = backend.get_attr_val_utf8_l('nsslapd-suffix')
+ if suffix == "cn=changelog":
+ # Always skip retro changelog
+ continue
+ if container is not None:
+ # Check if this backend is in the scope
+ if not container.endswith(suffix):
+ # skip this backend that is not in the scope
+ continue
+ indexes = backend.get_indexes()
+ for attr in membership_attrs:
+ report = copy.deepcopy(DSMOLE0002)
+ try:
+ index = indexes.get(attr)
+ types = index.get_attr_vals_utf8_l("nsIndexType")
+ if "sub" in types:
+ report['detail'] = report['detail'].replace('ATTR', attr)
+ report['detail'] = report['detail'].replace('BACKEND', suffix)
+ report['fix'] = report['fix'].replace('ATTR', attr)
+ report['fix'] = report['fix'].replace('BACKEND', suffix)
+ report['fix'] = report['fix'].replace('YOUR_INSTANCE', self._instance.serverid)
+ report['items'].append(suffix)
+ report['items'].append(attr)
+ report['check'] = f'attr:substring_index'
+ yield report
+ except KeyError:
+ continue
+
def get_attr(self):
"""Get memberofattr attribute"""
--
2.48.0
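The detection logic added in `_lint_member_substring_index` above can be sketched standalone. The following is an illustrative model only, with hypothetical backend data structures; the real check reads `cn=index` entries through lib389's `Backends`/`Index` objects:

```python
# Sketch of the DSMOLE0002 detection: flag membership attributes whose
# index includes the "sub" (substring) type, skipping the retro
# changelog backend and backends outside the plugin container scope.
def find_substring_indexed_members(backends, container=None):
    findings = []
    for suffix, indexes in backends.items():
        if suffix == "cn=changelog":
            continue  # always skip the retro changelog
        if container is not None and not container.endswith(suffix):
            continue  # backend is not in the plugin's container scope
        for attr in ("member", "uniquemember"):
            if "sub" in indexes.get(attr, []):
                findings.append((suffix, attr))
    return findings

backends = {
    "dc=example,dc=com": {"member": ["eq", "sub"], "uniquemember": ["eq"]},
    "cn=changelog": {"member": ["eq", "sub"]},
}
print(find_substring_indexed_members(backends))
# [('dc=example,dc=com', 'member')]
```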

From f5de84e309d5a4435198c9cc9b31b5722979f1ff Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 12 Jan 2026 10:58:02 +0100
Subject: [PATCH 5/5] Issue 7172 - (2nd) Index ordering mismatch after upgrade
(#7180)
Commit 742c12e0247ab64e87da000a4de2f3e5c99044ab introduced a regression
where the check to skip creating parentid/ancestorid indexes if they
already exist was incorrect.
The `ainfo_get()` function falls back to returning
LDBM_PSEUDO_ATTR_DEFAULT attrinfo when the requested attribute is not
found.
Since LDBM_PSEUDO_ATTR_DEFAULT is created before the ancestorid check,
`ainfo_get()` returns LDBM_PSEUDO_ATTR_DEFAULT instead of NULL, causing
the ancestorid index creation to be skipped entirely.
When operations later try to use the ancestorid index, they fall back to
LDBM_PSEUDO_ATTR_DEFAULT, and attempting to open the .default dbi
mid-transaction fails with MDB_NOTFOUND (-30798).
Fix Description:
Instead of just checking if `ainfo_get()` returns non-NULL, verify that
the returned attrinfo is actually for the requested attribute.
Fixes: https://github.com/389ds/389-ds-base/issues/7172
Reviewed by: @tbordaz (Thanks!)
---
ldap/servers/slapd/back-ldbm/instance.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/ldap/servers/slapd/back-ldbm/instance.c b/ldap/servers/slapd/back-ldbm/instance.c
index 71bf0f6fa..2a6e8cbb8 100644
--- a/ldap/servers/slapd/back-ldbm/instance.c
+++ b/ldap/servers/slapd/back-ldbm/instance.c
@@ -190,7 +190,7 @@ ldbm_instance_create_default_indexes(backend *be)
char *ancestorid_indexes_limit = NULL;
char *parentid_indexes_limit = NULL;
struct attrinfo *ai = NULL;
- struct attrinfo *index_already_configured = NULL;
+ int index_already_configured = 0;
struct index_idlistsizeinfo *iter;
int cookie;
int limit;
@@ -250,7 +250,8 @@ ldbm_instance_create_default_indexes(backend *be)
slapi_entry_free(e);
ainfo_get(be, (char *)LDBM_PARENTID_STR, &ai);
- index_already_configured = ai;
+ /* Check if the attrinfo is actually for parentid, not a fallback to .default */
+ index_already_configured = (ai != NULL && strcmp(ai->ai_type, LDBM_PARENTID_STR) == 0);
if (!index_already_configured) {
e = ldbm_instance_init_config_entry(LDBM_PARENTID_STR, "eq", 0, 0, 0, "integerOrderingMatch", parentid_indexes_limit);
ldbm_instance_config_add_index_entry(inst, e, flags);
@@ -294,7 +295,8 @@ ldbm_instance_create_default_indexes(backend *be)
* but we still want to use the attr index file APIs.
*/
ainfo_get(be, (char *)LDBM_ANCESTORID_STR, &ai);
- index_already_configured = ai;
+ /* Check if the attrinfo is actually for ancestorid, not a fallback to .default */
+ index_already_configured = (ai != NULL && strcmp(ai->ai_type, LDBM_ANCESTORID_STR) == 0);
if (!index_already_configured) {
e = ldbm_instance_init_config_entry(LDBM_ANCESTORID_STR, "eq", 0, 0, 0, "integerOrderingMatch", ancestorid_indexes_limit);
ldbm_instance_config_add_index_entry(inst, e, flags);
--
2.52.0
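The `ainfo_get()` pitfall fixed above generalizes to any lookup that falls back to a default record instead of returning NULL. A minimal Python sketch of the broken and corrected checks (names are illustrative, not the server's actual API):

```python
# A lookup that falls back to a ".default" record never returns None,
# so "result is not None" cannot mean "the index already exists".
DEFAULT = {"type": ".default"}

def ainfo_get(table, name):
    return table.get(name, DEFAULT)  # falls back instead of returning None

table = {"objectclass": {"type": "objectclass"}}

# Broken check: the truthy fallback makes "ancestorid" look configured.
broken = ainfo_get(table, "ancestorid") is not None
# Fixed check: verify the record is actually for the requested attribute,
# mirroring the strcmp(ai->ai_type, ...) test in the patch.
ai = ainfo_get(table, "ancestorid")
fixed = ai is not None and ai["type"] == "ancestorid"
print(broken, fixed)
# True False
```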

From dcb6298db5bfef4b2541f7c52682d153b424bfa7 Mon Sep 17 00:00:00 2001
From: James Chapman <jachapma@redhat.com>
Date: Tue, 4 Feb 2025 15:40:16 +0000
Subject: [PATCH] Issue 6566 - RI plugin failure to handle a modrdn for rename
of member of multiple groups (#6567)
Bug description:
With AM and RI plugins enabled, the rename of a user that is part of multiple groups
fails with a "value exists" error.
Fix description:
For a modrdn the RI plugin creates a new DN; before a modify is attempted, check
if the new DN already exists in the attr being updated.
Fixes: https://github.com/389ds/389-ds-base/issues/6566
Reviewed by: @progier389 , @tbordaz (Thank you)
---
ldap/servers/plugins/referint/referint.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/ldap/servers/plugins/referint/referint.c b/ldap/servers/plugins/referint/referint.c
index 468fdc239..218863ea5 100644
--- a/ldap/servers/plugins/referint/referint.c
+++ b/ldap/servers/plugins/referint/referint.c
@@ -924,6 +924,7 @@ _update_all_per_mod(Slapi_DN *entrySDN, /* DN of the searched entry */
{
Slapi_Mods *smods = NULL;
char *newDN = NULL;
+ struct berval bv = {0};
char **dnParts = NULL;
char *sval = NULL;
char *newvalue = NULL;
@@ -1026,22 +1027,30 @@ _update_all_per_mod(Slapi_DN *entrySDN, /* DN of the searched entry */
}
/* else: normalize_rc < 0) Ignore the DN normalization error for now. */
+ bv.bv_val = newDN;
+ bv.bv_len = strlen(newDN);
p = PL_strstr(sval, slapi_sdn_get_ndn(origDN));
if (p == sval) {
/* (case 1) */
slapi_mods_add_string(smods, LDAP_MOD_DELETE, attrName, sval);
- slapi_mods_add_string(smods, LDAP_MOD_ADD, attrName, newDN);
-
+ /* Add only if the attr value does not exist */
+ if (VALUE_PRESENT != attr_value_find_wsi(attr, &bv, &v)) {
+ slapi_mods_add_string(smods, LDAP_MOD_ADD, attrName, newDN);
+ }
} else if (p) {
/* (case 2) */
slapi_mods_add_string(smods, LDAP_MOD_DELETE, attrName, sval);
*p = '\0';
newvalue = slapi_ch_smprintf("%s%s", sval, newDN);
- slapi_mods_add_string(smods, LDAP_MOD_ADD, attrName, newvalue);
+ /* Add only if the attr value does not exist */
+ if (VALUE_PRESENT != attr_value_find_wsi(attr, &bv, &v)) {
+ slapi_mods_add_string(smods, LDAP_MOD_ADD, attrName, newvalue);
+ }
slapi_ch_free_string(&newvalue);
}
/* else: value does not include the modified DN. Ignore it. */
slapi_ch_free_string(&sval);
+ bv = (struct berval){0};
}
rc = _do_modify(mod_pb, entrySDN, slapi_mods_get_ldapmods_byref(smods));
if (rc) {
--
2.48.0
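The guard added above ("add only if the attr value does not exist") can be sketched in isolation. This is a pure-Python stand-in for the `attr_value_find_wsi()` check, with hypothetical DNs:

```python
# Sketch of the RI modrdn fix: when rewriting a member DN, queue the
# ADD only when the new value is not already present; otherwise the
# modify would fail with "value exists".
def rename_member(values, old_dn, new_dn):
    mods = []
    if old_dn in values:
        mods.append(("delete", old_dn))
        if new_dn not in values:  # mirror of the VALUE_PRESENT guard
            mods.append(("add", new_dn))
    return mods

# The entry already holds the target DN (user is in multiple groups),
# so only the delete is queued:
values = ["uid=a,ou=people", "uid=b,ou=people"]
print(rename_member(values, "uid=a,ou=people", "uid=b,ou=people"))
# [('delete', 'uid=a,ou=people')]
```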

From 8e3a484f88fc9f9a3fcdfdd685d4ad2ed3cbe5d9 Mon Sep 17 00:00:00 2001
From: progier389 <progier@redhat.com>
Date: Fri, 28 Jun 2024 18:56:49 +0200
Subject: [PATCH] Issue 6229 - After an initial failure, subsequent online
backups fail (#6230)
* Issue 6229 - After an initial failure, subsequent online backups will not work
Several issues related to backup task error handling:
Backends stay busy after the failure
Exit code is 0 in some cases
Crash if failing to open the backup directory
And a more general one:
lib389 Task DN collision
Solutions:
Always reset the busy flags that have been set
Ensure that 0 is not returned in error case
Avoid closing NULL directory descriptor
Use a timestamp having milliseconds precision to create the task DN
Issue: #6229
Reviewed by: @droideck (Thanks!)
(cherry picked from commit 04a0b6ac776a1d588ec2e10ff651e5015078ad21)
---
ldap/servers/slapd/back-ldbm/archive.c | 45 +++++-----
.../slapd/back-ldbm/db-mdb/mdb_layer.c | 3 +
src/lib389/lib389/__init__.py | 10 +--
src/lib389/lib389/tasks.py | 82 +++++++++----------
4 files changed, 70 insertions(+), 70 deletions(-)
diff --git a/ldap/servers/slapd/back-ldbm/archive.c b/ldap/servers/slapd/back-ldbm/archive.c
index 0460a42f6..6658cc80a 100644
--- a/ldap/servers/slapd/back-ldbm/archive.c
+++ b/ldap/servers/slapd/back-ldbm/archive.c
@@ -16,6 +16,8 @@
#include "back-ldbm.h"
#include "dblayer.h"
+#define NO_OBJECT ((Object*)-1)
+
int
ldbm_temporary_close_all_instances(Slapi_PBlock *pb)
{
@@ -270,6 +272,7 @@ ldbm_back_ldbm2archive(Slapi_PBlock *pb)
int run_from_cmdline = 0;
Slapi_Task *task;
struct stat sbuf;
+ Object *last_busy_inst_obj = NO_OBJECT;
slapi_pblock_get(pb, SLAPI_PLUGIN_PRIVATE, &li);
slapi_pblock_get(pb, SLAPI_SEQ_VAL, &rawdirectory);
@@ -380,13 +383,12 @@ ldbm_back_ldbm2archive(Slapi_PBlock *pb)
/* to avoid conflict w/ import, do this check for commandline, as well */
{
- Object *inst_obj, *inst_obj2;
ldbm_instance *inst = NULL;
/* server is up -- mark all backends busy */
- for (inst_obj = objset_first_obj(li->li_instance_set); inst_obj;
- inst_obj = objset_next_obj(li->li_instance_set, inst_obj)) {
- inst = (ldbm_instance *)object_get_data(inst_obj);
+ for (last_busy_inst_obj = objset_first_obj(li->li_instance_set); last_busy_inst_obj;
+ last_busy_inst_obj = objset_next_obj(li->li_instance_set, last_busy_inst_obj)) {
+ inst = (ldbm_instance *)object_get_data(last_busy_inst_obj);
/* check if an import/restore is already ongoing... */
if (instance_set_busy(inst) != 0 || dblayer_in_import(inst) != 0) {
@@ -400,20 +402,6 @@ ldbm_back_ldbm2archive(Slapi_PBlock *pb)
"another task and cannot be disturbed.",
inst->inst_name);
}
-
- /* painfully, we have to clear the BUSY flags on the
- * backends we'd already marked...
- */
- for (inst_obj2 = objset_first_obj(li->li_instance_set);
- inst_obj2 && (inst_obj2 != inst_obj);
- inst_obj2 = objset_next_obj(li->li_instance_set,
- inst_obj2)) {
- inst = (ldbm_instance *)object_get_data(inst_obj2);
- instance_set_not_busy(inst);
- }
- if (inst_obj2 && inst_obj2 != inst_obj)
- object_release(inst_obj2);
- object_release(inst_obj);
goto err;
}
}
@@ -427,18 +415,26 @@ ldbm_back_ldbm2archive(Slapi_PBlock *pb)
goto err;
}
- if (!run_from_cmdline) {
+err:
+ /* Clear all BUSY flags that have been previously set */
+ if (last_busy_inst_obj != NO_OBJECT) {
ldbm_instance *inst;
Object *inst_obj;
- /* none of these backends are busy anymore */
- for (inst_obj = objset_first_obj(li->li_instance_set); inst_obj;
+ for (inst_obj = objset_first_obj(li->li_instance_set);
+ inst_obj && (inst_obj != last_busy_inst_obj);
inst_obj = objset_next_obj(li->li_instance_set, inst_obj)) {
inst = (ldbm_instance *)object_get_data(inst_obj);
instance_set_not_busy(inst);
}
+ if (last_busy_inst_obj != NULL) {
+ /* release last seen object for aborted objset_next_obj iterations */
+ if (inst_obj != NULL) {
+ object_release(inst_obj);
+ }
+ object_release(last_busy_inst_obj);
+ }
}
-err:
if (return_value) {
if (dir_bak) {
slapi_log_err(SLAPI_LOG_ERR,
@@ -727,7 +723,10 @@ ldbm_archive_config(char *bakdir, Slapi_Task *task)
}
error:
- PR_CloseDir(dirhandle);
+ if (NULL != dirhandle) {
+ PR_CloseDir(dirhandle);
+ dirhandle = NULL;
+ }
dse_backup_unlock();
slapi_ch_free_string(&backup_config_dir);
slapi_ch_free_string(&dse_file);
diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c
index 4a7beedeb..3ecc47170 100644
--- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c
+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c
@@ -983,6 +983,9 @@ dbmdb_backup(struct ldbminfo *li, char *dest_dir, Slapi_Task *task)
if (ldbm_archive_config(dest_dir, task) != 0) {
slapi_log_err(SLAPI_LOG_ERR, "dbmdb_backup",
"Backup of config files failed or is incomplete\n");
+ if (0 == return_value) {
+ return_value = -1;
+ }
}
goto bail;
diff --git a/src/lib389/lib389/__init__.py b/src/lib389/lib389/__init__.py
index 368741a66..cb372c138 100644
--- a/src/lib389/lib389/__init__.py
+++ b/src/lib389/lib389/__init__.py
@@ -69,7 +69,7 @@ from lib389.utils import (
get_user_is_root)
from lib389.paths import Paths
from lib389.nss_ssl import NssSsl
-from lib389.tasks import BackupTask, RestoreTask
+from lib389.tasks import BackupTask, RestoreTask, Task
from lib389.dseldif import DSEldif
# mixin
@@ -1424,7 +1424,7 @@ class DirSrv(SimpleLDAPObject, object):
name, self.ds_paths.prefix)
# create the archive
- name = "backup_%s_%s.tar.gz" % (self.serverid, time.strftime("%m%d%Y_%H%M%S"))
+ name = "backup_%s_%s.tar.gz" % (self.serverid, Task.get_timestamp())
backup_file = os.path.join(backup_dir, name)
tar = tarfile.open(backup_file, "w:gz")
tar.extraction_filter = (lambda member, path: member)
@@ -2810,7 +2810,7 @@ class DirSrv(SimpleLDAPObject, object):
else:
# No output file specified. Use the default ldif location/name
cmd.append('-a')
- tnow = datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
+ tnow = Task.get_timestamp()
if bename:
ldifname = os.path.join(self.ds_paths.ldif_dir, "%s-%s-%s.ldif" % (self.serverid, bename, tnow))
else:
@@ -2881,7 +2881,7 @@ class DirSrv(SimpleLDAPObject, object):
if archive_dir is None:
# Use the instance name and date/time as the default backup name
- tnow = datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
+ tnow = Task.get_timestamp()
archive_dir = os.path.join(self.ds_paths.backup_dir, "%s-%s" % (self.serverid, tnow))
elif not archive_dir.startswith("/"):
# Relative path, append it to the bak directory
@@ -3506,7 +3506,7 @@ class DirSrv(SimpleLDAPObject, object):
if archive is None:
# Use the instance name and date/time as the default backup name
- tnow = datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
+ tnow = Task.get_timestamp()
if self.serverid is not None:
backup_dir_name = "%s-%s" % (self.serverid, tnow)
else:
diff --git a/src/lib389/lib389/tasks.py b/src/lib389/lib389/tasks.py
index 6c2adb5b2..6bf302862 100644
--- a/src/lib389/lib389/tasks.py
+++ b/src/lib389/lib389/tasks.py
@@ -118,7 +118,7 @@ class Task(DSLdapObject):
return super(Task, self).create(rdn, properties, basedn)
@staticmethod
- def _get_task_date():
+ def get_timestamp():
"""Return a timestamp to use in naming new task entries."""
return datetime.now().isoformat()
@@ -132,7 +132,7 @@ class AutomemberRebuildMembershipTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'automember_rebuild_' + Task._get_task_date()
+ self.cn = 'automember_rebuild_' + Task.get_timestamp()
dn = "cn=" + self.cn + "," + DN_AUTOMEMBER_REBUILD_TASK
super(AutomemberRebuildMembershipTask, self).__init__(instance, dn)
@@ -147,7 +147,7 @@ class AutomemberAbortRebuildTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'automember_abort_' + Task._get_task_date()
+ self.cn = 'automember_abort_' + Task.get_timestamp()
dn = "cn=" + self.cn + "," + DN_AUTOMEMBER_ABORT_REBUILD_TASK
super(AutomemberAbortRebuildTask, self).__init__(instance, dn)
@@ -161,7 +161,7 @@ class FixupLinkedAttributesTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'fixup_linked_attrs_' + Task._get_task_date()
+ self.cn = 'fixup_linked_attrs_' + Task.get_timestamp()
dn = "cn=" + self.cn + "," + DN_FIXUP_LINKED_ATTIBUTES
super(FixupLinkedAttributesTask, self).__init__(instance, dn)
@@ -175,7 +175,7 @@ class MemberUidFixupTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'memberUid_fixup_' + Task._get_task_date()
+ self.cn = 'memberUid_fixup_' + Task.get_timestamp()
dn = f"cn={self.cn},cn=memberuid task,cn=tasks,cn=config"
super(MemberUidFixupTask, self).__init__(instance, dn)
@@ -190,7 +190,7 @@ class MemberOfFixupTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'memberOf_fixup_' + Task._get_task_date()
+ self.cn = 'memberOf_fixup_' + Task.get_timestamp()
dn = "cn=" + self.cn + "," + DN_MBO_TASK
super(MemberOfFixupTask, self).__init__(instance, dn)
@@ -205,7 +205,7 @@ class USNTombstoneCleanupTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'usn_cleanup_' + Task._get_task_date()
+ self.cn = 'usn_cleanup_' + Task.get_timestamp()
dn = "cn=" + self.cn + ",cn=USN tombstone cleanup task," + DN_TASKS
super(USNTombstoneCleanupTask, self).__init__(instance, dn)
@@ -225,7 +225,7 @@ class csngenTestTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'csngenTest_' + Task._get_task_date()
+ self.cn = 'csngenTest_' + Task.get_timestamp()
dn = "cn=" + self.cn + ",cn=csngen_test," + DN_TASKS
super(csngenTestTask, self).__init__(instance, dn)
@@ -238,7 +238,7 @@ class EntryUUIDFixupTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'entryuuid_fixup_' + Task._get_task_date()
+ self.cn = 'entryuuid_fixup_' + Task.get_timestamp()
dn = "cn=" + self.cn + "," + DN_EUUID_TASK
super(EntryUUIDFixupTask, self).__init__(instance, dn)
self._must_attributes.extend(['basedn'])
@@ -252,7 +252,7 @@ class DBCompactTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'compact_db_' + Task._get_task_date()
+ self.cn = 'compact_db_' + Task.get_timestamp()
dn = "cn=" + self.cn + "," + DN_COMPACTDB_TASK
super(DBCompactTask, self).__init__(instance, dn)
@@ -265,7 +265,7 @@ class SchemaReloadTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'schema_reload_' + Task._get_task_date()
+ self.cn = 'schema_reload_' + Task.get_timestamp()
dn = "cn=" + self.cn + ",cn=schema reload task," + DN_TASKS
super(SchemaReloadTask, self).__init__(instance, dn)
@@ -278,7 +278,7 @@ class SyntaxValidateTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'syntax_validate_' + Task._get_task_date()
+ self.cn = 'syntax_validate_' + Task.get_timestamp()
dn = f"cn={self.cn},cn=syntax validate,cn=tasks,cn=config"
super(SyntaxValidateTask, self).__init__(instance, dn)
@@ -295,7 +295,7 @@ class AbortCleanAllRUVTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'abortcleanallruv_' + Task._get_task_date()
+ self.cn = 'abortcleanallruv_' + Task.get_timestamp()
dn = "cn=" + self.cn + ",cn=abort cleanallruv," + DN_TASKS
super(AbortCleanAllRUVTask, self).__init__(instance, dn)
@@ -312,7 +312,7 @@ class CleanAllRUVTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'cleanallruv_' + Task._get_task_date()
+ self.cn = 'cleanallruv_' + Task.get_timestamp()
dn = "cn=" + self.cn + ",cn=cleanallruv," + DN_TASKS
self._properties = None
@@ -359,7 +359,7 @@ class ImportTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'import_' + Task._get_task_date()
+ self.cn = 'import_' + Task.get_timestamp()
dn = "cn=%s,%s" % (self.cn, DN_IMPORT_TASK)
self._properties = None
@@ -388,7 +388,7 @@ class ExportTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'export_' + Task._get_task_date()
+ self.cn = 'export_' + Task.get_timestamp()
dn = "cn=%s,%s" % (self.cn, DN_EXPORT_TASK)
self._properties = None
@@ -411,7 +411,7 @@ class BackupTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'backup_' + Task._get_task_date()
+ self.cn = 'backup_' + Task.get_timestamp()
dn = "cn=" + self.cn + ",cn=backup," + DN_TASKS
self._properties = None
@@ -426,7 +426,7 @@ class RestoreTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'restore_' + Task._get_task_date()
+ self.cn = 'restore_' + Task.get_timestamp()
dn = "cn=" + self.cn + ",cn=restore," + DN_TASKS
self._properties = None
@@ -513,7 +513,7 @@ class Tasks(object):
raise ValueError("Import file (%s) does not exist" % input_file)
# Prepare the task entry
- cn = "import_" + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = "import_" + Task.get_timestamp()
dn = "cn=%s,%s" % (cn, DN_IMPORT_TASK)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -581,7 +581,7 @@ class Tasks(object):
raise ValueError("output_file is mandatory")
# Prepare the task entry
- cn = "export_" + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = "export_" + Task.get_timestamp()
dn = "cn=%s,%s" % (cn, DN_EXPORT_TASK)
entry = Entry(dn)
entry.update({
@@ -637,7 +637,7 @@ class Tasks(object):
raise ValueError("You must specify a backup directory.")
# build the task entry
- cn = "backup_" + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = "backup_" + Task.get_timestamp()
dn = "cn=%s,%s" % (cn, DN_BACKUP_TASK)
entry = Entry(dn)
entry.update({
@@ -694,7 +694,7 @@ class Tasks(object):
raise ValueError("Backup file (%s) does not exist" % backup_dir)
# build the task entry
- cn = "restore_" + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = "restore_" + Task.get_timestamp()
dn = "cn=%s,%s" % (cn, DN_RESTORE_TASK)
entry = Entry(dn)
entry.update({
@@ -789,7 +789,7 @@ class Tasks(object):
attrs.append(attr)
else:
attrs.append(attrname)
- cn = "index_vlv_%s" % (time.strftime("%m%d%Y_%H%M%S", time.localtime()))
+ cn = "index_vlv_%s" % (Task.get_timestamp())
dn = "cn=%s,%s" % (cn, DN_INDEX_TASK)
entry = Entry(dn)
entry.update({
@@ -803,7 +803,7 @@ class Tasks(object):
#
# Reindex all attributes - gather them first...
#
- cn = "index_all_%s" % (time.strftime("%m%d%Y_%H%M%S", time.localtime()))
+ cn = "index_all_%s" % (Task.get_timestamp())
dn = ('cn=%s,cn=ldbm database,cn=plugins,cn=config' % backend)
try:
indexes = self.conn.search_s(dn, ldap.SCOPE_SUBTREE, '(objectclass=nsIndex)')
@@ -815,7 +815,7 @@ class Tasks(object):
#
# Reindex specific attributes
#
- cn = "index_attrs_%s" % (time.strftime("%m%d%Y_%H%M%S", time.localtime()))
+ cn = "index_attrs_%s" % (Task.get_timestamp())
if isinstance(attrname, (tuple, list)):
# Need to guarantee this is a list (and not a tuple)
for attr in attrname:
@@ -903,8 +903,7 @@ class Tasks(object):
suffix = ents[0].getValue(attr)
- cn = "fixupmemberof_" + time.strftime("%m%d%Y_%H%M%S",
- time.localtime())
+ cn = "fixupmemberof_" + Task.get_timestamp()
dn = "cn=%s,%s" % (cn, DN_MBO_TASK)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -965,8 +964,7 @@ class Tasks(object):
if len(ents) != 1:
raise ValueError("invalid backend name: %s" % bename)
- cn = "fixupTombstone_" + time.strftime("%m%d%Y_%H%M%S",
- time.localtime())
+ cn = "fixupTombstone_" + Task.get_timestamp()
dn = "cn=%s,%s" % (cn, DN_TOMB_FIXUP_TASK)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -1019,7 +1017,7 @@ class Tasks(object):
@return exit code
'''
- cn = 'task-' + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = 'task-' + Task.get_timestamp()
dn = ('cn=%s,cn=automember rebuild membership,cn=tasks,cn=config' % cn)
entry = Entry(dn)
@@ -1077,7 +1075,7 @@ class Tasks(object):
if not ldif_out:
raise ValueError("Missing ldif_out")
- cn = 'task-' + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = 'task-' + Task.get_timestamp()
dn = ('cn=%s,cn=automember export updates,cn=tasks,cn=config' % cn)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -1129,7 +1127,7 @@ class Tasks(object):
if not ldif_out or not ldif_in:
raise ValueError("Missing ldif_out and/or ldif_in")
- cn = 'task-' + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = 'task-' + Task.get_timestamp()
dn = ('cn=%s,cn=automember map updates,cn=tasks,cn=config' % cn)
entry = Entry(dn)
@@ -1175,7 +1173,7 @@ class Tasks(object):
@return exit code
'''
- cn = 'task-' + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = 'task-' + Task.get_timestamp()
dn = ('cn=%s,cn=fixup linked attributes,cn=tasks,cn=config' % cn)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -1219,7 +1217,7 @@ class Tasks(object):
@return exit code
'''
- cn = 'task-' + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = 'task-' + Task.get_timestamp()
dn = ('cn=%s,cn=schema reload task,cn=tasks,cn=config' % cn)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -1264,7 +1262,7 @@ class Tasks(object):
@return exit code
'''
- cn = 'task-' + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = 'task-' + Task.get_timestamp()
dn = ('cn=%s,cn=memberuid task,cn=tasks,cn=config' % cn)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -1311,7 +1309,7 @@ class Tasks(object):
@return exit code
'''
- cn = 'task-' + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = 'task-' + Task.get_timestamp()
dn = ('cn=%s,cn=syntax validate,cn=tasks,cn=config' % cn)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -1358,7 +1356,7 @@ class Tasks(object):
@return exit code
'''
- cn = 'task-' + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = 'task-' + Task.get_timestamp()
dn = ('cn=%s,cn=USN tombstone cleanup task,cn=tasks,cn=config' % cn)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -1413,7 +1411,7 @@ class Tasks(object):
if not configfile:
raise ValueError("Missing required paramter: configfile")
- cn = 'task-' + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = 'task-' + Task.get_timestamp()
dn = ('cn=%s,cn=sysconfig reload,cn=tasks,cn=config' % cn)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -1464,7 +1462,7 @@ class Tasks(object):
if not suffix:
raise ValueError("Missing required paramter: suffix")
- cn = 'task-' + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = 'task-' + Task.get_timestamp()
dn = ('cn=%s,cn=cleanallruv,cn=tasks,cn=config' % cn)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -1516,7 +1514,7 @@ class Tasks(object):
if not suffix:
raise ValueError("Missing required paramter: suffix")
- cn = 'task-' + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = 'task-' + Task.get_timestamp()
dn = ('cn=%s,cn=abort cleanallruv,cn=tasks,cn=config' % cn)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -1571,7 +1569,7 @@ class Tasks(object):
if not nsArchiveDir:
raise ValueError("Missing required paramter: nsArchiveDir")
- cn = 'task-' + time.strftime("%m%d%Y_%H%M%S", time.localtime())
+ cn = 'task-' + Task.get_timestamp()
dn = ('cn=%s,cn=upgradedb,cn=tasks,cn=config' % cn)
entry = Entry(dn)
entry.setValues('objectclass', 'top', 'extensibleObject')
@@ -1616,6 +1614,6 @@ class LDAPIMappingReloadTask(Task):
"""
def __init__(self, instance, dn=None):
- self.cn = 'reload-' + Task._get_task_date()
+ self.cn = 'reload-' + Task.get_timestamp()
dn = f'cn={self.cn},cn=reload ldapi mappings,cn=tasks,cn=config'
super(LDAPIMappingReloadTask, self).__init__(instance, dn)
--
2.48.0
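The lib389 side of this patch replaces second-granularity `strftime` task names with `datetime.now().isoformat()`. A short sketch of why: two tasks created within the same second collide under the old format but not under the ISO timestamp, which carries microseconds.

```python
# Two hypothetical task-creation instants in the same second,
# differing only in microseconds.
from datetime import datetime

t = datetime(2026, 1, 9, 18, 48, 11, 123456)
u = datetime(2026, 1, 9, 18, 48, 11, 654321)

coarse = (t.strftime("%m%d%Y_%H%M%S"), u.strftime("%m%d%Y_%H%M%S"))
fine = (t.isoformat(), u.isoformat())

print(coarse[0] == coarse[1], fine[0] == fine[1])
# True False  -> the old names collide (same task DN), the new ones don't
```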

From 2b1b2db90c9d337166fa28e313f60828cd43de09 Mon Sep 17 00:00:00 2001
From: tbordaz <tbordaz@redhat.com>
Date: Thu, 6 Feb 2025 18:25:36 +0100
Subject: [PATCH] Issue 6554 - During import of entries without nsUniqueId, a
supplier generates duplicate nsUniqueId (LMDB only) (#6582)
Bug description:
During an import the entry is prepared (schema, operational
attributes, password encryption,...) before starting the
update of the database and indexes.
A step of the preparation is to assign a value to 'nsuniqueid'
operational attribute. 'nsuniqueid' must be unique.
In LMDB the preparation is done by multiple threads (workers).
In that case the 'nsuniqueid' values are generated in parallel and,
as they are time based, several values can be duplicated.
Fix description:
To prevent that the routine dbmdb_import_generate_uniqueid
should make sure to synchronize the workers.
fixes: #6554
Reviewed by: Pierre Rogier
---
.../tests/suites/import/import_test.py | 79 ++++++++++++++++++-
.../back-ldbm/db-mdb/mdb_import_threads.c | 11 +++
2 files changed, 89 insertions(+), 1 deletion(-)
diff --git a/dirsrvtests/tests/suites/import/import_test.py b/dirsrvtests/tests/suites/import/import_test.py
index dbd921924..54d304753 100644
--- a/dirsrvtests/tests/suites/import/import_test.py
+++ b/dirsrvtests/tests/suites/import/import_test.py
@@ -14,11 +14,13 @@ import os
import pytest
import time
import glob
+import re
import logging
import subprocess
from datetime import datetime
from lib389.topologies import topology_st as topo
-from lib389._constants import DEFAULT_SUFFIX, TaskWarning
+from lib389.topologies import topology_m2 as topo_m2
+from lib389._constants import DEFAULT_BENAME, DEFAULT_SUFFIX, TaskWarning
from lib389.dbgen import dbgen_users
from lib389.tasks import ImportTask
from lib389.index import Indexes
@@ -688,6 +690,81 @@ def test_online_import_under_load(topo):
assert import_task.get_exit_code() == 0
+def test_duplicate_nsuniqueid(topo_m2, request):
+ """Test that after an offline import all
+ nsuniqueid are different
+
+ :id: a2541677-a288-4633-bacf-4050cc56016d
+ :setup: MMR with 2 suppliers
+ :steps:
+ 1. Stop the instance to do offline operations
+ 2. Generate a 5K users LDIF file
+ 3. Check that no nsuniqueid are present in the generated file
+ 4. Import the generated LDIF
+ 5. Export the database
+ 6. Check that the exported LDIF contains more than 5K nsuniqueid
+ 7. Check that there is no duplicate nsuniqueid in the exported LDIF
+ :expectedresults:
+ 1. Should succeed
+ 2. Should succeed
+ 3. Should succeed
+ 4. Should succeed
+ 5. Should succeed
+ 6. Should succeed
+ 7. Should succeed
+ """
+ m1 = topo_m2.ms["supplier1"]
+
+ # Stop the instance
+ m1.stop()
+
+ # Generate a test ldif (5k entries)
+ log.info("Generating LDIF...")
+ ldif_dir = m1.get_ldif_dir()
+ import_ldif = ldif_dir + '/5k_users_import.ldif'
+ dbgen_users(m1, 5000, import_ldif, DEFAULT_SUFFIX)
+
+ # Check that the generated LDIF does not contain nsuniqueid
+ all_nsuniqueid = []
+ with open(import_ldif, 'r') as file:
+ for line in file:
+ if line.lower().startswith("nsuniqueid: "):
+ all_nsuniqueid.append(line.split(': ')[1])
+ log.info("import file contains " + str(len(all_nsuniqueid)) + " nsuniqueid")
+ assert len(all_nsuniqueid) == 0
+
+ # Import the "nsuniqueid-free" LDIF file
+ if not m1.ldif2db('userRoot', None, None, None, import_ldif):
+ assert False
+
+ # Export the DB that now should contain nsuniqueid
+ export_ldif = ldif_dir + '/5k_user_export.ldif'
+ log.info("export to file " + export_ldif)
+ m1.db2ldif(bename=DEFAULT_BENAME, suffixes=[DEFAULT_SUFFIX],
+ excludeSuffixes=None, repl_data=False,
+ outputfile=export_ldif, encrypt=False)
+
+ # Check that the export LDIF contain nsuniqueid
+ all_nsuniqueid = []
+ with open(export_ldif, 'r') as file:
+ for line in file:
+ if line.lower().startswith("nsuniqueid: "):
+ all_nsuniqueid.append(line.split(': ')[1])
+ log.info("export file " + export_ldif + " contains " + str(len(all_nsuniqueid)) + " nsuniqueid")
+ assert len(all_nsuniqueid) >= 5000
+
+ # Check that the nsuniqueid are unique
+ assert len(set(all_nsuniqueid)) == len(all_nsuniqueid)
+
+ def fin():
+ if os.path.exists(import_ldif):
+ os.remove(import_ldif)
+ if os.path.exists(export_ldif):
+ os.remove(export_ldif)
+ m1.start()
+
+ request.addfinalizer(fin)
+
if __name__ == '__main__':
# Run isolated
# -s for DEBUG mode
diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c
index 707a110c5..0f445bb56 100644
--- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c
+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c
@@ -610,10 +610,20 @@ dbmdb_import_generate_uniqueid(ImportJob *job, Slapi_Entry *e)
{
const char *uniqueid = slapi_entry_get_uniqueid(e);
int rc = UID_SUCCESS;
+ static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
if (!uniqueid && (job->uuid_gen_type != SLAPI_UNIQUEID_GENERATE_NONE)) {
char *newuniqueid;
+ /* With 'mdb' we have several workers generating nsuniqueid;
+ * we need to serialize them to prevent generating duplicate values.
+ * From a performance point of view this only impacts import.
+ * The default value is SLAPI_UNIQUEID_GENERATE_TIME_BASED, so
+ * the only syscall is clock_gettime followed by string formatting,
+ * which should limit contention.
+ */
+ pthread_mutex_lock(&mutex);
+
/* generate id based on dn */
if (job->uuid_gen_type == SLAPI_UNIQUEID_GENERATE_NAME_BASED) {
char *dn = slapi_entry_get_dn(e);
@@ -624,6 +634,7 @@ dbmdb_import_generate_uniqueid(ImportJob *job, Slapi_Entry *e)
/* time based */
rc = slapi_uniqueIDGenerateString(&newuniqueid);
}
+ pthread_mutex_unlock(&mutex);
if (rc == UID_SUCCESS) {
slapi_entry_set_uniqueid(e, newuniqueid);
--
2.48.0
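The race fixed in the patch above (parallel workers producing duplicate time-based IDs) can be sketched as a toy model in Python. This is not lib389 or server code; the names `generate_time_based_id` and the monotonic bump are illustrative assumptions standing in for the pthread mutex added to `dbmdb_import_generate_uniqueid`:

```python
import threading
import time

# Hypothetical toy model, not the server's implementation: time-based ID
# generation must be serialized across import workers, otherwise two
# workers reading the clock at the same instant emit the same ID.
_lock = threading.Lock()
_last = 0

def generate_time_based_id():
    """Return a unique time-based ID; the lock plays the role of the
    pthread mutex in the patch, and the max() bump guarantees uniqueness
    even when two calls land on the same clock tick."""
    global _last
    with _lock:
        _last = max(_last + 1, time.monotonic_ns())
        return _last

results = []

def worker():
    for _ in range(1000):
        results.append(generate_time_based_id())

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# No duplicates across 8 concurrent workers.
assert len(set(results)) == len(results) == 8000
```

Dropping the lock (or the `max()` bump) reintroduces the bug the patch describes: with a coarse enough clock, several workers can observe the same timestamp.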


@ -1,4 +1,4 @@
For detailed information on developing plugins for
389 Directory Server visit.
For detailed information on developing plugins for 389 Directory Server visit
http://port389/wiki/Plugins
https://www.port389.org/docs/389ds/design/plugins.html
https://github.com/389ds/389-ds-base/blob/main/src/slapi_r_plugin/README.md


@ -10,7 +10,11 @@ ExcludeArch: i686
%global __provides_exclude ^libjemalloc\\.so.*$
%endif
%bcond bundle_libdb %{defined rhel}
%bcond bundle_libdb 0
%if 0%{?rhel} >= 10
%bcond bundle_libdb 1
%endif
%if %{with bundle_libdb}
%global libdb_version 5.3
%global libdb_base_version db-%{libdb_version}.28
@ -24,6 +28,11 @@ ExcludeArch: i686
%endif
%endif
%bcond libbdb_ro 0
%if 0%{?fedora} >= 43
%bcond libbdb_ro 1
%endif
# This is used in certain builds to help us know if it has extra features.
%global variant base
# This enables a sanitized build.
@ -66,9 +75,9 @@ ExcludeArch: i686
Summary: 389 Directory Server (%{variant})
Name: 389-ds-base
Version: 3.0.6
Version: 3.2.0
Release: %{autorelease -n %{?with_asan:-e asan}}%{?dist}
License: GPL-3.0-or-later WITH GPL-3.0-389-ds-base-exception AND (0BSD OR Apache-2.0 OR MIT) AND (Apache-2.0 OR Apache-2.0 WITH LLVM-exception OR MIT) AND (Apache-2.0 OR BSD-2-Clause OR MIT) AND (Apache-2.0 OR BSL-1.0) AND (Apache-2.0 OR MIT OR Zlib) AND (Apache-2.0 OR MIT) AND (CC-BY-4.0 AND MIT) AND (MIT OR Apache-2.0) AND Unicode-3.0 AND (MIT OR CC0-1.0) AND (MIT OR Unlicense) AND 0BSD AND Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND ISC AND MIT AND MIT AND ISC AND MPL-2.0 AND PSF-2.0
License: GPL-3.0-or-later WITH GPL-3.0-389-ds-base-exception AND (0BSD OR Apache-2.0 OR MIT) AND (Apache-2.0 OR Apache-2.0 WITH LLVM-exception OR MIT) AND (Apache-2.0 OR BSL-1.0) AND (Apache-2.0 OR LGPL-2.1-or-later OR MIT) AND (Apache-2.0 OR MIT OR Zlib) AND (Apache-2.0 OR MIT) AND (CC-BY-4.0 AND MIT) AND (MIT OR Apache-2.0) AND Unicode-3.0 AND (MIT OR CC0-1.0) AND (MIT OR Unlicense) AND 0BSD AND Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND ISC AND MIT AND MIT AND ISC AND MPL-2.0 AND PSF-2.0 AND Zlib
URL: https://www.port389.org
Obsoletes: %{name}-legacy-tools < 1.4.4.6
Obsoletes: %{name}-legacy-tools-debuginfo < 1.4.4.6
@ -76,89 +85,81 @@ Provides: ldif2ldbm >= 0
##### Bundled cargo crates list - START #####
Provides: bundled(crate(addr2line)) = 0.24.2
Provides: bundled(crate(adler2)) = 2.0.0
Provides: bundled(crate(ahash)) = 0.7.8
Provides: bundled(crate(adler2)) = 2.0.1
Provides: bundled(crate(allocator-api2)) = 0.2.21
Provides: bundled(crate(atty)) = 0.2.14
Provides: bundled(crate(autocfg)) = 1.4.0
Provides: bundled(crate(backtrace)) = 0.3.74
Provides: bundled(crate(autocfg)) = 1.5.0
Provides: bundled(crate(backtrace)) = 0.3.75
Provides: bundled(crate(base64)) = 0.13.1
Provides: bundled(crate(bitflags)) = 2.8.0
Provides: bundled(crate(bitflags)) = 2.9.1
Provides: bundled(crate(byteorder)) = 1.5.0
Provides: bundled(crate(cbindgen)) = 0.26.0
Provides: bundled(crate(cc)) = 1.2.10
Provides: bundled(crate(cfg-if)) = 1.0.0
Provides: bundled(crate(cc)) = 1.2.27
Provides: bundled(crate(cfg-if)) = 1.0.1
Provides: bundled(crate(clap)) = 3.2.25
Provides: bundled(crate(clap_lex)) = 0.2.4
Provides: bundled(crate(concread)) = 0.2.21
Provides: bundled(crate(crossbeam)) = 0.8.4
Provides: bundled(crate(crossbeam-channel)) = 0.5.14
Provides: bundled(crate(crossbeam-deque)) = 0.8.6
Provides: bundled(crate(concread)) = 0.5.6
Provides: bundled(crate(crossbeam-epoch)) = 0.9.18
Provides: bundled(crate(crossbeam-queue)) = 0.3.12
Provides: bundled(crate(crossbeam-utils)) = 0.8.21
Provides: bundled(crate(errno)) = 0.3.10
Provides: bundled(crate(equivalent)) = 1.0.2
Provides: bundled(crate(errno)) = 0.3.12
Provides: bundled(crate(fastrand)) = 2.3.0
Provides: bundled(crate(fernet)) = 0.1.4
Provides: bundled(crate(foldhash)) = 0.1.5
Provides: bundled(crate(foreign-types)) = 0.3.2
Provides: bundled(crate(foreign-types-shared)) = 0.1.1
Provides: bundled(crate(getrandom)) = 0.2.15
Provides: bundled(crate(getrandom)) = 0.3.3
Provides: bundled(crate(gimli)) = 0.31.1
Provides: bundled(crate(hashbrown)) = 0.12.3
Provides: bundled(crate(hashbrown)) = 0.15.4
Provides: bundled(crate(heck)) = 0.4.1
Provides: bundled(crate(hermit-abi)) = 0.1.19
Provides: bundled(crate(indexmap)) = 1.9.3
Provides: bundled(crate(instant)) = 0.1.13
Provides: bundled(crate(itoa)) = 1.0.14
Provides: bundled(crate(jobserver)) = 0.1.32
Provides: bundled(crate(libc)) = 0.2.169
Provides: bundled(crate(linux-raw-sys)) = 0.4.15
Provides: bundled(crate(lock_api)) = 0.4.12
Provides: bundled(crate(log)) = 0.4.25
Provides: bundled(crate(lru)) = 0.7.8
Provides: bundled(crate(memchr)) = 2.7.4
Provides: bundled(crate(miniz_oxide)) = 0.8.3
Provides: bundled(crate(itoa)) = 1.0.15
Provides: bundled(crate(jobserver)) = 0.1.33
Provides: bundled(crate(libc)) = 0.2.174
Provides: bundled(crate(linux-raw-sys)) = 0.9.4
Provides: bundled(crate(log)) = 0.4.27
Provides: bundled(crate(lru)) = 0.13.0
Provides: bundled(crate(memchr)) = 2.7.5
Provides: bundled(crate(miniz_oxide)) = 0.8.9
Provides: bundled(crate(object)) = 0.36.7
Provides: bundled(crate(once_cell)) = 1.20.2
Provides: bundled(crate(openssl)) = 0.10.68
Provides: bundled(crate(once_cell)) = 1.21.3
Provides: bundled(crate(openssl)) = 0.10.73
Provides: bundled(crate(openssl-macros)) = 0.1.1
Provides: bundled(crate(openssl-sys)) = 0.9.104
Provides: bundled(crate(openssl-sys)) = 0.9.109
Provides: bundled(crate(os_str_bytes)) = 6.6.1
Provides: bundled(crate(parking_lot)) = 0.11.2
Provides: bundled(crate(parking_lot_core)) = 0.8.6
Provides: bundled(crate(paste)) = 0.1.18
Provides: bundled(crate(paste-impl)) = 0.1.18
Provides: bundled(crate(pin-project-lite)) = 0.2.16
Provides: bundled(crate(pkg-config)) = 0.3.31
Provides: bundled(crate(ppv-lite86)) = 0.2.20
Provides: bundled(crate(pkg-config)) = 0.3.32
Provides: bundled(crate(proc-macro-hack)) = 0.5.20+deprecated
Provides: bundled(crate(proc-macro2)) = 1.0.93
Provides: bundled(crate(quote)) = 1.0.38
Provides: bundled(crate(rand)) = 0.8.5
Provides: bundled(crate(rand_chacha)) = 0.3.1
Provides: bundled(crate(rand_core)) = 0.6.4
Provides: bundled(crate(redox_syscall)) = 0.2.16
Provides: bundled(crate(rustc-demangle)) = 0.1.24
Provides: bundled(crate(rustix)) = 0.38.44
Provides: bundled(crate(ryu)) = 1.0.18
Provides: bundled(crate(scopeguard)) = 1.2.0
Provides: bundled(crate(serde)) = 1.0.217
Provides: bundled(crate(serde_derive)) = 1.0.217
Provides: bundled(crate(serde_json)) = 1.0.137
Provides: bundled(crate(proc-macro2)) = 1.0.95
Provides: bundled(crate(quote)) = 1.0.40
Provides: bundled(crate(r-efi)) = 5.3.0
Provides: bundled(crate(rustc-demangle)) = 0.1.25
Provides: bundled(crate(rustix)) = 1.0.7
Provides: bundled(crate(ryu)) = 1.0.20
Provides: bundled(crate(serde)) = 1.0.219
Provides: bundled(crate(serde_derive)) = 1.0.219
Provides: bundled(crate(serde_json)) = 1.0.140
Provides: bundled(crate(shlex)) = 1.3.0
Provides: bundled(crate(smallvec)) = 1.13.2
Provides: bundled(crate(smallvec)) = 1.15.1
Provides: bundled(crate(sptr)) = 0.3.2
Provides: bundled(crate(strsim)) = 0.10.0
Provides: bundled(crate(syn)) = 2.0.96
Provides: bundled(crate(tempfile)) = 3.15.0
Provides: bundled(crate(syn)) = 2.0.103
Provides: bundled(crate(tempfile)) = 3.20.0
Provides: bundled(crate(termcolor)) = 1.4.1
Provides: bundled(crate(textwrap)) = 0.16.1
Provides: bundled(crate(tokio)) = 1.43.0
Provides: bundled(crate(tokio-macros)) = 2.5.0
Provides: bundled(crate(textwrap)) = 0.16.2
Provides: bundled(crate(tokio)) = 1.45.1
Provides: bundled(crate(toml)) = 0.5.11
Provides: bundled(crate(unicode-ident)) = 1.0.15
Provides: bundled(crate(tracing)) = 0.1.41
Provides: bundled(crate(tracing-attributes)) = 0.1.30
Provides: bundled(crate(tracing-core)) = 0.1.34
Provides: bundled(crate(unicode-ident)) = 1.0.18
Provides: bundled(crate(uuid)) = 0.8.2
Provides: bundled(crate(vcpkg)) = 0.2.15
Provides: bundled(crate(version_check)) = 0.9.5
Provides: bundled(crate(wasi)) = 0.11.0+wasi_snapshot_preview1
Provides: bundled(crate(wasi)) = 0.14.2+wasi_0.2.4
Provides: bundled(crate(winapi)) = 0.3.9
Provides: bundled(crate(winapi-i686-pc-windows-gnu)) = 0.4.0
Provides: bundled(crate(winapi-util)) = 0.1.9
@ -173,8 +174,7 @@ Provides: bundled(crate(windows_i686_msvc)) = 0.52.6
Provides: bundled(crate(windows_x86_64_gnu)) = 0.52.6
Provides: bundled(crate(windows_x86_64_gnullvm)) = 0.52.6
Provides: bundled(crate(windows_x86_64_msvc)) = 0.52.6
Provides: bundled(crate(zerocopy)) = 0.7.35
Provides: bundled(crate(zerocopy-derive)) = 0.7.35
Provides: bundled(crate(wit-bindgen-rt)) = 0.39.0
Provides: bundled(crate(zeroize)) = 1.8.1
Provides: bundled(crate(zeroize_derive)) = 1.4.2
Provides: bundled(npm(@eslint-community/eslint-utils)) = 4.4.1
@ -195,6 +195,7 @@ Provides: bundled(npm(@patternfly/patternfly)) = 5.4.1
Provides: bundled(npm(@patternfly/react-charts)) = 7.4.3
Provides: bundled(npm(@patternfly/react-core)) = 5.4.1
Provides: bundled(npm(@patternfly/react-icons)) = 5.4.0
Provides: bundled(npm(@patternfly/react-log-viewer)) = 5.3.0
Provides: bundled(npm(@patternfly/react-styles)) = 5.4.0
Provides: bundled(npm(@patternfly/react-table)) = 5.4.1
Provides: bundled(npm(@patternfly/react-tokens)) = 5.4.0
@ -204,10 +205,10 @@ Provides: bundled(npm(@types/d3-ease)) = 3.0.2
Provides: bundled(npm(@types/d3-interpolate)) = 3.0.4
Provides: bundled(npm(@types/d3-path)) = 3.1.0
Provides: bundled(npm(@types/d3-scale)) = 4.0.8
Provides: bundled(npm(@types/d3-shape)) = 3.1.7
Provides: bundled(npm(@types/d3-time)) = 3.0.4
Provides: bundled(npm(@types/d3-shape)) = 3.1.6
Provides: bundled(npm(@types/d3-time)) = 3.0.3
Provides: bundled(npm(@types/d3-timer)) = 3.0.2
Provides: bundled(npm(@ungap/structured-clone)) = 1.2.1
Provides: bundled(npm(@ungap/structured-clone)) = 1.2.0
Provides: bundled(npm(@xterm/addon-canvas)) = 0.7.0
Provides: bundled(npm(@xterm/xterm)) = 5.5.0
Provides: bundled(npm(acorn)) = 8.14.0
@ -216,10 +217,10 @@ Provides: bundled(npm(ajv)) = 6.12.6
Provides: bundled(npm(ansi-regex)) = 5.0.1
Provides: bundled(npm(ansi-styles)) = 4.3.0
Provides: bundled(npm(argparse)) = 2.0.1
Provides: bundled(npm(attr-accept)) = 2.2.5
Provides: bundled(npm(attr-accept)) = 2.2.4
Provides: bundled(npm(autolinker)) = 3.16.2
Provides: bundled(npm(balanced-match)) = 1.0.2
Provides: bundled(npm(brace-expansion)) = 1.1.11
Provides: bundled(npm(brace-expansion)) = 1.1.12
Provides: bundled(npm(callsites)) = 3.1.0
Provides: bundled(npm(chalk)) = 4.1.2
Provides: bundled(npm(color-convert)) = 2.0.1
@ -238,7 +239,7 @@ Provides: bundled(npm(d3-shape)) = 3.2.0
Provides: bundled(npm(d3-time)) = 3.1.0
Provides: bundled(npm(d3-time-format)) = 4.1.0
Provides: bundled(npm(d3-timer)) = 3.0.1
Provides: bundled(npm(debug)) = 4.4.0
Provides: bundled(npm(debug)) = 4.3.7
Provides: bundled(npm(deep-is)) = 0.1.4
Provides: bundled(npm(delaunator)) = 4.0.1
Provides: bundled(npm(delaunay-find)) = 0.0.6
@ -258,12 +259,12 @@ Provides: bundled(npm(esutils)) = 2.0.3
Provides: bundled(npm(fast-deep-equal)) = 3.1.3
Provides: bundled(npm(fast-json-stable-stringify)) = 2.1.0
Provides: bundled(npm(fast-levenshtein)) = 2.0.6
Provides: bundled(npm(fastq)) = 1.18.0
Provides: bundled(npm(fastq)) = 1.17.1
Provides: bundled(npm(file-entry-cache)) = 6.0.1
Provides: bundled(npm(file-selector)) = 2.1.2
Provides: bundled(npm(file-selector)) = 2.1.0
Provides: bundled(npm(find-up)) = 5.0.0
Provides: bundled(npm(flat-cache)) = 3.2.0
Provides: bundled(npm(flatted)) = 3.3.2
Provides: bundled(npm(flatted)) = 3.3.1
Provides: bundled(npm(focus-trap)) = 7.5.4
Provides: bundled(npm(fs.realpath)) = 1.0.0
Provides: bundled(npm(gettext-parser)) = 2.1.0
@ -288,7 +289,7 @@ Provides: bundled(npm(isexe)) = 2.0.0
Provides: bundled(npm(js-sha1)) = 0.7.0
Provides: bundled(npm(js-sha256)) = 0.11.0
Provides: bundled(npm(js-tokens)) = 4.0.0
Provides: bundled(npm(js-yaml)) = 4.1.0
Provides: bundled(npm(js-yaml)) = 4.1.1
Provides: bundled(npm(json-buffer)) = 3.0.1
Provides: bundled(npm(json-schema-traverse)) = 0.4.1
Provides: bundled(npm(json-stable-stringify-without-jsonify)) = 1.0.1
@ -299,6 +300,7 @@ Provides: bundled(npm(locate-path)) = 6.0.0
Provides: bundled(npm(lodash)) = 4.17.21
Provides: bundled(npm(lodash.merge)) = 4.6.2
Provides: bundled(npm(loose-envify)) = 1.4.0
Provides: bundled(npm(memoize-one)) = 5.2.1
Provides: bundled(npm(minimatch)) = 3.1.2
Provides: bundled(npm(ms)) = 2.1.3
Provides: bundled(npm(natural-compare)) = 1.4.0
@ -312,7 +314,7 @@ Provides: bundled(npm(path-exists)) = 4.0.0
Provides: bundled(npm(path-is-absolute)) = 1.0.1
Provides: bundled(npm(path-key)) = 3.1.1
Provides: bundled(npm(prelude-ls)) = 1.2.1
Provides: bundled(npm(prettier)) = 3.4.2
Provides: bundled(npm(prettier)) = 3.3.3
Provides: bundled(npm(process-nextick-args)) = 2.0.1
Provides: bundled(npm(prop-types)) = 15.8.1
Provides: bundled(npm(punycode)) = 2.3.1
@ -347,28 +349,28 @@ Provides: bundled(npm(type-fest)) = 0.20.2
Provides: bundled(npm(uri-js)) = 4.4.1
Provides: bundled(npm(util-deprecate)) = 1.0.2
Provides: bundled(npm(uuid)) = 10.0.0
Provides: bundled(npm(victory-area)) = 37.3.5
Provides: bundled(npm(victory-axis)) = 37.3.5
Provides: bundled(npm(victory-bar)) = 37.3.5
Provides: bundled(npm(victory-box-plot)) = 37.3.5
Provides: bundled(npm(victory-brush-container)) = 37.3.5
Provides: bundled(npm(victory-chart)) = 37.3.5
Provides: bundled(npm(victory-core)) = 37.3.5
Provides: bundled(npm(victory-create-container)) = 37.3.5
Provides: bundled(npm(victory-cursor-container)) = 37.3.5
Provides: bundled(npm(victory-group)) = 37.3.5
Provides: bundled(npm(victory-legend)) = 37.3.5
Provides: bundled(npm(victory-line)) = 37.3.5
Provides: bundled(npm(victory-pie)) = 37.3.5
Provides: bundled(npm(victory-polar-axis)) = 37.3.5
Provides: bundled(npm(victory-scatter)) = 37.3.5
Provides: bundled(npm(victory-selection-container)) = 37.3.5
Provides: bundled(npm(victory-shared-events)) = 37.3.5
Provides: bundled(npm(victory-stack)) = 37.3.5
Provides: bundled(npm(victory-tooltip)) = 37.3.5
Provides: bundled(npm(victory-vendor)) = 37.3.5
Provides: bundled(npm(victory-voronoi-container)) = 37.3.5
Provides: bundled(npm(victory-zoom-container)) = 37.3.5
Provides: bundled(npm(victory-area)) = 37.3.1
Provides: bundled(npm(victory-axis)) = 37.3.1
Provides: bundled(npm(victory-bar)) = 37.3.1
Provides: bundled(npm(victory-box-plot)) = 37.3.1
Provides: bundled(npm(victory-brush-container)) = 37.3.1
Provides: bundled(npm(victory-chart)) = 37.3.1
Provides: bundled(npm(victory-core)) = 37.3.1
Provides: bundled(npm(victory-create-container)) = 37.3.1
Provides: bundled(npm(victory-cursor-container)) = 37.3.1
Provides: bundled(npm(victory-group)) = 37.3.1
Provides: bundled(npm(victory-legend)) = 37.3.1
Provides: bundled(npm(victory-line)) = 37.3.1
Provides: bundled(npm(victory-pie)) = 37.3.1
Provides: bundled(npm(victory-polar-axis)) = 37.3.1
Provides: bundled(npm(victory-scatter)) = 37.3.1
Provides: bundled(npm(victory-selection-container)) = 37.3.1
Provides: bundled(npm(victory-shared-events)) = 37.3.1
Provides: bundled(npm(victory-stack)) = 37.3.1
Provides: bundled(npm(victory-tooltip)) = 37.3.1
Provides: bundled(npm(victory-vendor)) = 37.3.1
Provides: bundled(npm(victory-voronoi-container)) = 37.3.1
Provides: bundled(npm(victory-zoom-container)) = 37.3.1
Provides: bundled(npm(which)) = 2.0.2
Provides: bundled(npm(word-wrap)) = 1.2.5
Provides: bundled(npm(wrappy)) = 1.0.2
@ -387,6 +389,7 @@ BuildRequires: libicu-devel
BuildRequires: pcre2-devel
BuildRequires: cracklib-devel
BuildRequires: json-c-devel
BuildRequires: libxcrypt-devel
%if %{with clang}
BuildRequires: libatomic
BuildRequires: clang
@ -405,9 +408,11 @@ BuildRequires: libtsan
BuildRequires: libubsan
%endif
%endif
%if %{without libbdb_ro}
%if %{without bundle_libdb}
BuildRequires: libdb-devel
%endif
%endif
# The following are needed to build the snmp ldap-agent
BuildRequires: net-snmp-devel
@ -434,18 +439,7 @@ BuildRequires: doxygen
# For tests!
BuildRequires: libcmocka-devel
# For lib389 and related components.
BuildRequires: python%{python3_pkgversion}
BuildRequires: python%{python3_pkgversion}-devel
BuildRequires: python%{python3_pkgversion}-setuptools
BuildRequires: python%{python3_pkgversion}-ldap
BuildRequires: python%{python3_pkgversion}-pyasn1
BuildRequires: python%{python3_pkgversion}-pyasn1-modules
BuildRequires: python%{python3_pkgversion}-dateutil
BuildRequires: python%{python3_pkgversion}-argcomplete
BuildRequires: python%{python3_pkgversion}-argparse-manpage
BuildRequires: python%{python3_pkgversion}-policycoreutils
BuildRequires: python%{python3_pkgversion}-libselinux
BuildRequires: python%{python3_pkgversion}-cryptography
# For cockpit
%if %{with cockpit}
@ -454,6 +448,9 @@ BuildRequires: npm
BuildRequires: nodejs
%endif
# For autosetup -S git
BuildRequires: git
Requires: %{name}-libs = %{version}-%{release}
Requires: python%{python3_pkgversion}-lib389 = %{version}-%{release}
@ -474,14 +471,20 @@ Requires: cyrus-sasl-md5
# This is optionally supported by us, as we use it in our tests
Requires: cyrus-sasl-plain
# this is needed for backldbm
%if %{with libbdb_ro}
Requires: %{name}-robdb-libs = %{version}-%{release}
%else
%if %{without bundle_libdb}
Requires: libdb
%endif
%endif
Requires: lmdb-libs
# Needed by logconv.pl
%if %{without libbdb_ro}
%if %{without bundle_libdb}
Requires: perl-DB_File
%endif
%endif
Requires: perl-Archive-Tar
%if 0%{?fedora} >= 33 || 0%{?rhel} >= 9
Requires: perl-debugger
@ -497,24 +500,22 @@ Requires: python3-file-magic
# Picks up our systemd deps.
%{?systemd_requires}
Source0: %{name}-%{version}.tar.bz2
Source0: https://github.com/389ds/%{name}/releases/download/%{name}-%{version}/%{name}-%{version}.tar.bz2
Source2: %{name}-devel.README
%if %{with bundle_jemalloc}
Source3: https://github.com/jemalloc/%{jemalloc_name}/releases/download/%{jemalloc_ver}/%{jemalloc_name}-%{jemalloc_ver}.tar.bz2
Source6: jemalloc-5.3.0_throw_bad_alloc.patch
%endif
Source4: 389-ds-base.sysusers
%if %{with bundle_libdb}
Source5: https://fedorapeople.org/groups/389ds/libdb-5.3.28-59.tar.bz2
%endif
Patch: 0001-Issue-6544-logconv.py-python3-magic-conflicts-with-p.patch
Patch: 0002-Issue-6374-nsslapd-mdb-max-dbs-autotuning-doesn-t-wo.patch
Patch: 0003-Issue-6090-Fix-dbscan-options-and-man-pages-6315.patch
Patch: 0004-Issue-6489-After-log-rotation-refresh-the-FD-pointer.patch
Patch: 0005-Issue-6436-MOD-on-a-large-group-slow-if-substring-in.patch
Patch: 0006-Issue-6566-RI-plugin-failure-to-handle-a-modrdn-for-.patch
Patch: 0007-Issue-6229-After-an-initial-failure-subsequent-onlin.patch
Patch: 0008-Issue-6554-During-import-of-entries-without-nsUnique.patch
Patch: 0001-Issue-7096-During-replication-online-total-init-the-.patch
Patch: 0002-Issue-Revise-paged-result-search-locking.patch
Patch: 0003-Issue-7108-Fix-shutdown-crash-in-entry-cache-destruc.patch
Patch: 0004-Issue-7172-Index-ordering-mismatch-after-upgrade-717.patch
Patch: 0005-Issue-7172-2nd-Index-ordering-mismatch-after-upgrade.patch
%description
389 Directory Server is an LDAPv3 compliant server. The base package includes
@ -525,6 +526,17 @@ isn't what you want. Please contact support immediately.
Please see http://seclists.org/oss-sec/2016/q1/363 for more information.
%endif
%if %{with libbdb_ro}
%package robdb-libs
Summary: Read-only Berkeley Database Library
License: GPL-2.0-or-later OR LGPL-2.1-or-later
%description robdb-libs
The %{name}-robdb-libs package contains a library derived from the rpm
project (https://github.com/rpm-software-management/rpm) that provides
some basic functions to search and read Berkeley Database records.
%endif
%package libs
Summary: Core libraries for 389 Directory Server (%{variant})
@ -609,18 +621,8 @@ Requires: openssl
# This is for /usr/bin/c_rehash tool, only needed for openssl < 1.1.0
Requires: openssl-perl
Requires: iproute
Requires: python%{python3_pkgversion}
Requires: python%{python3_pkgversion}-distro
Requires: python%{python3_pkgversion}-ldap
Requires: python%{python3_pkgversion}-pyasn1
Requires: python%{python3_pkgversion}-pyasn1-modules
Requires: python%{python3_pkgversion}-dateutil
Requires: python%{python3_pkgversion}-argcomplete
Requires: python%{python3_pkgversion}-libselinux
Requires: python%{python3_pkgversion}-setuptools
Requires: python%{python3_pkgversion}-cryptography
Recommends: bash-completion
%{?python_provide:%python_provide python%{python3_pkgversion}-lib389}
%description -n python%{python3_pkgversion}-lib389
This module contains tools and libraries for accessing, testing,
@ -639,8 +641,14 @@ Requires: python%{python3_pkgversion}-lib389 = %{version}-%{release}
A cockpit UI Plugin for configuring and administering the 389 Directory Server
%endif
%generate_buildrequires
cd src/lib389
# Tests do not run in %%check (lib389's tests need to be fixed)
# but test dependencies are needed to check import lib389.topologies
%pyproject_buildrequires -g test
%prep
%autosetup -p1 -n %{name}-%{version}
%autosetup -S git -p1 -n %{name}-%{version}
%if %{with bundle_jemalloc}
%setup -q -n %{name}-%{version} -T -D -b 3
@ -653,6 +661,8 @@ A cockpit UI Plugin for configuring and administering the 389 Directory Server
cp %{SOURCE2} README.devel
%build
# Workaround until https://github.com/389ds/389-ds-base/issues/6476 is fixed
export CFLAGS="%{optflags} -std=gnu17"
%if %{with clang}
CLANG_FLAGS="--enable-clang"
@ -702,6 +712,7 @@ COCKPIT_FLAGS="--disable-cockpit"
# Build jemalloc
pushd ../%{jemalloc_name}-%{jemalloc_ver}
patch -p1 -F3 < %{SOURCE6}
%configure \
--libdir=%{_libdir}/%{pkgname}/lib \
--bindir=%{_libdir}/%{pkgname}/bin \
@ -716,6 +727,7 @@ mkdir -p ../%{libdb_base_version}
pushd ../%{libdb_base_version}
tar -xjf %{_topdir}/SOURCES/%{libdb_full_version}.tar.bz2
mv %{libdb_full_version} SOURCES
sed -i -e '/^CFLAGS=/s/-fno-strict-aliasing/& -std=gnu99/' %{_builddir}/%{name}-%{version}/rpm/bundle-libdb.spec
rpmbuild --define "_topdir $PWD" -bc %{_builddir}/%{name}-%{version}/rpm/bundle-libdb.spec
popd
%endif
@ -724,6 +736,11 @@ popd
autoreconf -fiv
%configure \
%if %{with libbdb_ro}
--with-libbdb-ro \
%else
--without-libbdb-ro \
%endif
%if %{with bundle_libdb}
--with-bundle-libdb=%{_builddir}/%{libdb_base_version}/BUILD/%{libdb_base_dir}/dist/dist-tls \
%endif
@ -745,16 +762,10 @@ autoreconf -fiv
%endif
# lib389
make src/lib389/setup.py
pushd ./src/lib389
%py3_build
%{python3} validate_version.py --update
%pyproject_wheel
popd
# argparse-manpage dynamic man pages have hardcoded man v1 in header,
# need to change it to v8
sed -i "1s/\"1\"/\"8\"/" %{_builddir}/%{name}-%{version}/src/lib389/man/dsconf.8
sed -i "1s/\"1\"/\"8\"/" %{_builddir}/%{name}-%{version}/src/lib389/man/dsctl.8
sed -i "1s/\"1\"/\"8\"/" %{_builddir}/%{name}-%{version}/src/lib389/man/dsidm.8
sed -i "1s/\"1\"/\"8\"/" %{_builddir}/%{name}-%{version}/src/lib389/man/dscreate.8
# Generate symbolic info for debuggers
export XCFLAGS=$RPM_OPT_FLAGS
@ -784,7 +795,8 @@ cp -r %{_builddir}/%{name}-%{version}/man/man3 $RPM_BUILD_ROOT/%{_mandir}/man3
# lib389
pushd src/lib389
%py3_install
%pyproject_install
%pyproject_save_files -l lib389
popd
# Register CLI tools for bash completion
@ -830,6 +842,21 @@ cp -pa $libdbbuilddir/dist/dist-tls/.libs/%{libdb_bundle_name} $RPM_BUILD_ROOT%{
popd
%endif
%if %{with libbdb_ro}
pushd lib/librobdb
cp -pa COPYING %{_builddir}/%{name}-%{version}/COPYING.librobdb
cp -pa COPYING.RPM %{_builddir}/%{name}-%{version}/COPYING.RPM
install -m 0755 -d %{buildroot}/%{_libdir}
install -m 0755 -d %{buildroot}/%{_docdir}/%{name}-robdb-libs
install -m 0755 -d %{buildroot}/%{_licensedir}/%{name}
install -m 0755 -d %{buildroot}/%{_licensedir}/%{name}-robdb-libs
install -m 0644 $PWD/README.md %{buildroot}/%{_docdir}/%{name}-robdb-libs/README.md
install -m 0644 $PWD/COPYING %{buildroot}/%{_licensedir}/%{name}-robdb-libs/COPYING
install -m 0644 $PWD/COPYING.RPM %{buildroot}/%{_licensedir}/%{name}-robdb-libs/COPYING.RPM
install -m 0644 $PWD/COPYING %{buildroot}/%{_licensedir}/%{name}/COPYING.librobdb
install -m 0644 $PWD/COPYING.RPM %{buildroot}/%{_licensedir}/%{name}/COPYING.RPM
popd
%endif
%check
# This checks the code, if it fails it prints why, then re-raises the fail to shortcircuit the rpm build.
@ -840,6 +867,9 @@ export TSAN_OPTIONS=print_stacktrace=1:second_deadlock_stack=1:history_size=7
if ! make DESTDIR="$RPM_BUILD_ROOT" check; then cat ./test-suite.log && false; fi
%endif
# Check import for lib389 modules
%pyproject_check_import -e '*.test*'
%post
if [ -n "$DEBUGPOSTTRANS" ] ; then
output=$DEBUGPOSTTRANS
@ -852,11 +882,6 @@ fi
# reload to pick up any changes to systemd files
/bin/systemctl daemon-reload >$output 2>&1 || :
# https://fedoraproject.org/wiki/Packaging:UsersAndGroups#Soft_static_allocation
# Soft static allocation for UID and GID
# sysusers.d format https://fedoraproject.org/wiki/Changes/Adopting_sysusers.d_format
%sysusers_create_compat %{SOURCE4}
# Reload our sysctl before we restart (if we can)
sysctl --system &> $output; true
@ -982,6 +1007,9 @@ exit 0
%exclude %{_libdir}/%{pkgname}/lib/libjemalloc_pic.a
%exclude %{_libdir}/%{pkgname}/lib/pkgconfig
%endif
%if %{with libbdb_ro}
%exclude %{_libdir}/%{pkgname}/librobdb.so
%endif
%files devel
%doc LICENSE LICENSE.GPLv3+ LICENSE.openssl README.devel
@ -1021,18 +1049,24 @@ exit 0
%{_libdir}/%{pkgname}/plugins/libback-bdb.so
%endif
%files -n python%{python3_pkgversion}-lib389
%doc LICENSE LICENSE.GPLv3+
%{python3_sitelib}/lib389*
%{_sbindir}/dsconf
%{_mandir}/man8/dsconf.8.gz
%{_sbindir}/dscreate
%{_mandir}/man8/dscreate.8.gz
%{_sbindir}/dsctl
%{_mandir}/man8/dsctl.8.gz
%{_sbindir}/dsidm
%{_mandir}/man8/dsidm.8.gz
%files -n python%{python3_pkgversion}-lib389 -f %{pyproject_files}
%doc src/lib389/README.md
%license LICENSE LICENSE.GPLv3+
# Binaries
%{_bindir}/dsconf
%{_bindir}/dscreate
%{_bindir}/dsctl
%{_bindir}/dsidm
%{_bindir}/openldap_to_ds
%{_libexecdir}/%{pkgname}/dscontainer
# Man pages
%{_mandir}/man8/dsconf.8.gz
%{_mandir}/man8/dscreate.8.gz
%{_mandir}/man8/dsctl.8.gz
%{_mandir}/man8/dsidm.8.gz
%{_mandir}/man8/openldap_to_ds.8.gz
%exclude %{_mandir}/man1
# Bash completions for scripts provided by python3-lib389
%{bash_completions_dir}/dsctl
%{bash_completions_dir}/dsconf
%{bash_completions_dir}/dscreate
@ -1044,5 +1078,16 @@ exit 0
%doc README.md
%endif
%if %{with libbdb_ro}
%files robdb-libs
%license COPYING.librobdb COPYING.RPM
%doc %{_defaultdocdir}/%{name}-robdb-libs/README.md
%{_libdir}/%{pkgname}/librobdb.so
%{_licensedir}/%{name}-robdb-libs/COPYING
%{_licensedir}/%{name}/COPYING.RPM
%{_licensedir}/%{name}/COPYING.librobdb
%endif
%changelog
%autochangelog
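The %bcond conditionals introduced in this spec (bundle_libdb, libbdb_ro) can be toggled at build time without editing the file. A hedged sketch of the rpmbuild invocations, assuming the flag names defined by the bconds above (not runnable outside a prepared build tree):

```shell
# Force the bundled Berkeley DB build (the RHEL >= 10 default in this spec)
rpmbuild -ba 389-ds-base.spec --with bundle_libdb

# Build against the read-only BDB library instead (the Fedora >= 43 default)
rpmbuild -ba 389-ds-base.spec --with libbdb_ro --without bundle_libdb
```

The `--with`/`--without` flags flip the corresponding `%{with ...}` tests, which is how the spec selects between libdb-devel, the bundled tarball, and the robdb-libs subpackage.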


@ -1,3 +1,18 @@
* Tue May 14 2024 James Chapman <jachapma@redhat.com> - 3.1.0-1
- Bump version to 3.1.0
- Issue 6142 - Fix CI tests (#6161)
- Issue 6157 - Cockpit crashes when getting replication status if topology contains an old 389ds version (#6158)
- Issue 5105 - lmdb - Cannot create entries with long rdn - fix covscan (#6131)
- Issue 6086 - Ambiguous warning about SELinux in dscreate for non-root user
- Issue 6094 - Add coverity scan workflow
- Issue 5962 - Rearrange includes for 32-bit support logic
- Issue 6046 - Make dscreate work during kickstart installations
- Issue 6073 - Improve error log when running out of memory (#6084)
- Issue 6071 - Instance creation/removal is slow
- Issue 6010 - 389 ds ignores nsslapd-maxdescriptors (#6027)
- Issue 6075 - Ignore build artifacts (#6076)
- Issue 6068 - Add dscontainer stop function
* Mon Apr 15 2024 James Chapman <jachapma@redhat.com> - 3.0.2-1
- Bump version to 3.0.2
- Issue 6082 - Remove explicit dependencies toward libdb - revert default (#6145)


@ -0,0 +1,41 @@
#commit 3de0c24859f4413bf03448249078169bb50bda0f
#Author: divanorama <divanorama@gmail.com>
#Date: Thu Sep 29 23:35:59 2022 +0200
#
# Disable builtin malloc in tests
#
# With `--with-jemalloc-prefix=` and without `-fno-builtin` or `-O1`, both clang and gcc may optimize out `malloc` calls
# whose result is unused. Comparing the result to NULL does not necessarily count as a use.
#
# This is not a problem for most client programs, since it only affects genuinely unused pointers, but in
# tests it is important that allocations actually execute.
# `-fno-builtin` disables this optimization for both gcc and clang, and applying it only to the test code should not be an issue.
# An alternative would be to force a "use" of the result, but that would require more changes and might miss other optimization-related issues.
#
# This should resolve https://github.com/jemalloc/jemalloc/issues/2091
#
#diff --git a/Makefile.in b/Makefile.in
#index 6809fb29..a964f07e 100644
#--- a/Makefile.in
#+++ b/Makefile.in
#@@ -458,6 +458,8 @@ $(TESTS_OBJS): $(objroot)test/%.$(O): $(srcroot)test/%.c
# $(TESTS_CPP_OBJS): $(objroot)test/%.$(O): $(srcroot)test/%.cpp
# $(TESTS_OBJS): CPPFLAGS += -I$(srcroot)test/include -I$(objroot)test/include
# $(TESTS_CPP_OBJS): CPPFLAGS += -I$(srcroot)test/include -I$(objroot)test/include
#+$(TESTS_OBJS): CFLAGS += -fno-builtin
#+$(TESTS_CPP_OBJS): CPPFLAGS += -fno-builtin
# ifneq ($(IMPORTLIB),$(SO))
# $(CPP_OBJS) $(C_SYM_OBJS) $(C_OBJS) $(C_JET_SYM_OBJS) $(C_JET_OBJS): CPPFLAGS += -DDLLEXPORT
# endif
diff --git a/src/jemalloc_cpp.cpp b/src/jemalloc_cpp.cpp
index fffd6aee..5a682991 100644
--- a/src/jemalloc_cpp.cpp
+++ b/src/jemalloc_cpp.cpp
@@ -93,7 +93,7 @@ handleOOM(std::size_t size, bool nothrow) {
}
if (ptr == nullptr && !nothrow)
- std::__throw_bad_alloc();
+ throw std::bad_alloc();
return ptr;
}


@@ -1,3 +1,3 @@
SHA512 (389-ds-base-3.0.6.tar.bz2) = 9091da9229f20446357fd713f4177a885faa3fb4f83fc7806bdd0590f959dd02e6ebb8bb4e573b19537efeb3cef96f43eedab565b98b2d155055ea579f09b474
SHA512 (jemalloc-5.3.0.tar.bz2) = 22907bb052096e2caffb6e4e23548aecc5cc9283dce476896a2b1127eee64170e3562fa2e7db9571298814a7a2c7df6e8d1fbe152bd3f3b0c1abec22a2de34b1
SHA512 (libdb-5.3.28-59.tar.bz2) = 731a434fa2e6487ebb05c458b0437456eb9f7991284beb08cb3e21931e23bdeddddbc95bfabe3a2f9f029fe69cd33a2d4f0f5ce6a9811e9c3b940cb6fde4bf79
SHA512 (389-ds-base-3.2.0.tar.bz2) = 9ff6aa56b30863c619f4f324344dca72cc883236bfe8d94520e8469d9e306f54b373ee2504eda18dcb0ecda33f915a3e64a6f3cdaa93a69b74d901caa48545e1