Compare commits


6 commits

Author SHA1 Message Date
Viktor Ashirov
018b733e32 Fix broken FreeIPA upgrade and replication issues
- Issue 7172 - Index ordering mismatch after upgrade (#7180)
2026-01-12 12:22:51 +01:00
Viktor Ashirov
315334121b Correct License: string for robdb-libs 2026-01-09 19:42:11 +01:00
Viktor Ashirov
8c22d99174 Fix broken FreeIPA upgrade and replication issues
- Issue 7172 - Index ordering mismatch after upgrade (#7173)
- Issue 7108 - Fix shutdown crash in entry cache destruction (#7163)
- Issue - Revise paged result search locking
- Issue 7096 - During replication online total init the function idl_id_is_in_idlist is not scaling with large database (#7145)
- Issue 7160 - Add lib389 version sync check to configure (#7165)
- Issue 7166 - db_config_set asserts because of dynamic list (#7167)
- Sync lib389 version to 3.1.4 (#7161)
- Issue 7150 - Compressed access log rotations skipped, accesslog-list out of sync (#7151)

Use correct tarball from upstream.
2026-01-09 19:31:00 +01:00
Mark Reynolds
fcd35df640 Issue 7147 - entrycache_eviction_test is failing (#7148)
Issue 1793 - RFE - Dynamic lists - UI and CLI updates
Issue 7119 - Fix DNA shared config replication test (#7143)
Issue 7081 - Repl Log Analysis - Implement data sampling with performance and timezone fixes (#7086)
Issue 1793 - RFE - Implement dynamic lists
Issue 7112 - dsctrl dblib bdb2mdb core dumps and won't allow conversion (#7144)
Issue 7053 - Remove memberof_del_dn_from_groups from MemberOf plugin (#7064)
Issue 7138 - test_cleanallruv_repl does not restart supplier3 (#7139)
Issue 6753 - Port ticket47921 test to indirect_cos_test using DSLdapObject (#7134)
Issue 7128 - memory corruption in alias entry plugin (#7131)
Issue 7091 - Duplicate local password policy entries listed (#7092)
Issue 7124 - BDB cursor race condition with transaction isolation (#7125)
Issue 6951 - Dynamic Certificate refresh phase 1 - Search support (#7117)
Issue 7132 - Keep alive entry updated too soon after an offline import (#7133)
Issue 7135 - Not enough space for tests on GH runner (#7136)
Issue 7121 - LeakSanitizer: various leaks during replication (#7122)
Issue 7115 - LeakSanitizer: leak in `slapd_bind_local_user()` (#7116)
Issue 7109 - AddressSanitizer: SEGV ldap/servers/slapd/csnset.c:302 in csnset_dup (#7114)
Issue 7119 - Harden DNA plugin locking for shared server list operations (#7120)
Issue 7084 - UI - schema - sorting attributes breaks expanded row
Issue 6753 - Port ticket47910 test to logconv_test using DSLdapObject (#7098)
Issue 6753 - Port ticket47920 test to ldap_controls_test using DSLdapObject (#7103)
Issue 7007 - Improve paged result search locking
Issue 7041 - Add WebUI test for group member management (#7111)
Issue 3555 - UI - Fix audit issue with npm - glob (#7107)
Issue 7089 - Fix dsconf certificate list (#7090)
Issue 7076, 6992, 6784, 6214 - Fix CI test failures (#7077)
Bump js-yaml from 4.1.0 to 4.1.1 in /src/cockpit/389-console (#7097)
Issue 7069 - Fix error reporting in HAProxy trusted IP parsing (#7094)
Issue 7049 - RetroCL plugin generates invalid LDIF
Issue 7055 - Online initialization of consumers fails with error -23 (#7075)
Issue 6753 - Remove ticket 47900 test (#7087)
Issue 6753 - Port ticket 49008 test (#7080)
Issue 7042 - Enable global_backend_lock when memberofallbackend is enabled (#7043)
Issue 7078 - audit json logging does not encode binary values
Issue 7069 - Add Subnet/CIDR Support for HAProxy Trusted IPs (#7070)
Issue 7056 - DSBLE0007 doesn't generate remediation steps for missing indexes
Issue 6660 - CLI, UI - Improve replication log analyzer usability (#7062)
Issue 7065 - A search filter containing a non normalized DN assertion does not return matching entries (#7068)
Issue 7071 - search filter (&(cn:dn:=groups)) no longer returns results
Issue 7073 - Add NDN cache size configuration and enforcement tests (#7074)
Issue 6753 - Removing ticket 47871 test and porting to DSLdapObject (#7045)
Issue 7041 - CLI/UI - memberOf - no way to add/remove specific group filters
Issue 6753 - Port ticket 48228 test (#7067)
Issue 7029 - Add test case to measure ndn cache performance impact (#7030)
Issue 7061 - CLI/UI - Improve error messages for dsconf localpwp list
Issue 7059 - UI - unable to upload pem file
Issue 7032 - The new ipahealthcheck test ipahealthcheck.ds.backends.BackendsCheck raises CRITICAL issue (#7036)
Issue 7047 - MemberOf plugin logs null attribute name on fixup task completion (#7048)
Issue 7044 - RFE - index sudoHost by default (#7046)
Issue 6846 - Attribute uniqueness is not enforced with modrdn (#7026)
Issue 6784 - Support of Entry cache pinned entries (#6785)
Issue 6979 - Improve the way to detect asynchronous operations in the access logs (#6980)
Issue 6753 - Port ticket 47931 test (#7038)
Issue 7035 - RFE - memberOf - adding scoping for specific groups
Issue - CLI/UI - Add option to delete all replication conflict entries
Issue 7033 - lib389 -  basic plugin status not in JSON
Issue 7023 - UI - if first instance that is loaded is stopped it breaks parts of the UI
Issue 6753 - Removing ticket 47714 test and porting to DSLdapObject (#6946)
Issue 7027 - 389-ds-base OpenScanHub Leaks Detected (#7028)
Issue 6753 - Removing ticket 47676 test and porting to DSLdapObject (#6938)
Issue 6966 - On large DB, unlimited IDL scan limit reduce the SRCH performance (#6967)
Issue 6660 - UI - Improve replication log analysis charts and usability (#6968)
Issue 6753 - Removing ticket 47653MMR test and porting to DSLdapObject (#6926)
Issue 7021 - Units for changing MDB max size are not consistent across different tools (#7022)
Issue 6753 - Removing ticket 49463 test and porting to DSLdapObject (#6899)
Issue 6954 - do not delete referrals on chain_on_update backend
Issue 6982 - UI - MemberOf shared config does not validate DN properly (#6983)
Issue 6740 - Fix FIPS mode test failures in syncrepl, mapping tree, and resource limits (#6993)
Issue 7018 - BUG - prevent stack depth being hit (#7019)
Issue 7014 - memberOf - ignored deferred updates with LMDB
Issue 7002 - restore is failing. (#7003)
Issue 6758 - Fix WebUI monitoring test failure due to FormSelect component deprecation (#7004)
Issue 6753 - Removing ticket 47869 test and porting to DSLdapObject (#7001)
Issue 6753 - Port ticket 49073 test (#7005)
Issue 7016 - fix NULL deref in send_referrals_from_entry() (#7017)
Issue 6753 - Port ticket 47815 test (#7000)
Issue 7010 - Fix certdir underflow in slapd_nss_init() (#7011)
Issue 7012 - improve dscrl dbverify result when backend does not exists (#7013)
Issue 6753 - Removing ticket 477828 test and porting to DSLdapObject (#6989)
Issue 6753 - Removing ticket 47721 test and porting to DSLdapObject (#6973)
Issue 6992 - Improve handling of mismatched ldif import (#6999)
Issue 6997 - Logic error in get_bdb_impl_status prevents bdb2mdb execution (#6998)
Issue 6810 - Deprecate PAM PTA plugin configuration attributes in base entry - fix memleak (#6988)
Issue 6971 - bundle-rust-npm.py: TypeError: argument of type 'NoneType' is not iterable (#6972)
Fix overflow in certmap filter/DN buffers (#6995)
Issue 6753 - Port ticket 49386 test (#6987)
Issue 6753 - Removing ticket 47787 test and porting to DSLdapObject (#6976)
Issue 6753 - Port ticket 49072 test (#6984)
Issue 6990 - UI - Replace deprecated Select components with new TypeaheadSelect (#6996)
Issue 6990 - UI - Fix typeahead Select fields losing values on Enter keypress (#6991)
Issue 6887 - Enhance logconv.py to add support for JSON access logs (#6889)
Issue 6985 - Some logconv CI tests fail with BDB (#6986)
Issue 6891 - JSON logging - add wrapper function that checks for NULL
Issue 4835 - dsconf display an incomplete help with changelog setting (#6769)
Issue 6753 - Port ticket 47963 & 49184 tests (#6970)
Issue 6753 - Port ticket 47829 & 47833 tests
Issue 6977 - UI - Show error message when trying to use unavailable ports (#6978)
Issue 6956 - More UI fixes
Issue 6626 - Fix version
Issue 6900 - Rename test files for proper pytest discovery (#6909)
Issue 6947 - Revise time skew check in healthcheck tool and add option to exclude checks
Issue 6805 - RFE - Multiple backend entry cache tuning
Issue 6753 - Port and fix ticket 47823 tests
Issue 6843 - Add CI tests for logconv.py (#6856)
Issue 6933 - When deferred memberof update is enabled after the server crashed it should not launch memberof fixup task by default (#6935)
Issue  - UI - update Radio handlers and LDAP entries last modified time
Issue 6810 - Deprecate PAM PTA plugin configuration attributes in base entry (#6832)
Issue 6660 - UI - Fix minor typo (#6955)
Issue 6753 - Port ticket 47808 test
Issue 6910 - Fix latest coverity issues
Issue 6753 - Removing ticket 50232 test and porting to DSLdapObject (#6861)
Issue 6919 - numSubordinates/tombstoneNumSubordinates are inconsisten… (#6920)
Issue 6430 - Fix build with bundled libdb
Issue 6342 - buffer owerflow in the function parseVariant (#6927)
Issue 6940 - dsconf monitor server fails with ldapi:// due to absent server ID (#6941)
Issue 6936 - Make user/subtree policy creation idempotent (#6937)
Migrate from PR_Poll to epoll and timerfd. (#6924)
Issue 6928 - The parentId attribute is indexed with improper matching rule
Issue 6753 - Removing ticket 49540 test and porting to DSLdapObject (#6877)
Issue 6904 - Fix config_test.py::test_lmdb_config
Issue 5120 - Fix compilation error
Issue 6929 - Compilation failure with rust-1.89 on Fedora ELN
Issue 6922 - AddressSanitizer: leaks found by acl test suite
Issue 6519 - Add basic dsidm account tests
Issue 6753 - Port ticket test 47573
Issue 6875 - Fix dsidm tests
Issues 6913, 6886, 6250 - Adjust xfail marks (#6914)
Issue 6768 - ns-slapd crashes when a referral is added (#6780)
Issue 6468 - CLI - Fix default error log level
Issue 6181 - RFE - Allow system to manage uid/gid at startup
Issue 6901 - Update changelog trimming logging - fix tests
Issue 6778 - Memory leak in roles_cache_create_object_from_entry part 2
Issue 6897 - Fix disk monitoring test failures and improve test maintainability (#6898)
Issue 6884 - Mask password hashes in audit logs (#6885)
Issue 6594 - Add test for numSubordinates replication consistency with tombstones (#6862)
Issue 6250 - Add test for entryUSN overflow on failed add operations (#6821)
Issue 6895 - Crash if repl keep alive entry can not be created
Issue 6663 - Fix NULL subsystem crash in JSON error logging (#6883)
Issue 6430 - implement read-only bdb (#6431)
Issue 6901 - Update changelog trimming logging
Issue 6880 - Fix ds_logs test suite failure
Issue 6352 - Fix DeprecationWarning
Issue 6800 - Rerun the check in verbose mode on failure
Issue 6893 - Log user that is updated during password modify extended operation
Issue 6772 - dsconf - Replicas with the "consumer" role allow for viewing and modification of their changelog. (#6773)
Issue 6829 - Update parametrized docstring for tests
Issue 6888 - Missing access JSON logging for TLS/Client auth
Issue 6878 - Prevent repeated disconnect logs during shutdown (#6879)
Issue 6872 - compressed log rotation creates files with world readable permission
Issue 6859 - str2filter is not fully applying matching rules
Issue 5733 - Remove outdated Dockerfiles
Issue 6800 - Check for minimal supported Python version
Issue 6868 - UI - schema attribute table expansion break after moving to a new page
Issue 6865 - AddressSanitizer: leak in agmt_update_init_status
Issue 6848 - AddressSanitizer: leak in do_search
Issue 6850 - AddressSanitizer: memory leak in mdb_init
Issue 6854 - Refactor for improved data management (#6855)
Issue 6756 - CLI, UI - Properly handle disabled NDN cache (#6757)
Issue 6857 - uiduniq: allow specifying match rules in the filter
Issue 6852 - Move ds* CLI tools back to /sbin
Issue 6753 - Port ticket tests 48294 & 48295
Issue 6753 - Add 'add_exclude_subtree' and 'remove_exclude_subtree' methods to Attribute uniqueness plugin
Issue 6841 - Cancel Actions when PR is updated
Issue 6838 - lib389/replica.py is using nonexistent datetime.UTC in Python 3.9
Issue 6822 - Backend creation cleanup and Database UI tab error handling (#6823)
Issue 6782 - Improve paged result locking
Issue 6829 - Update parametrized docstring for tests
2025-12-16 17:11:34 -05:00
Viktor Ashirov
783bc25eed Rebuild for Python 3.14.0rc3
Resolves: rhbz#2396668
2025-09-19 17:11:37 +02:00
Yaakov Selkowitz
341bced77e Fix build --with-bundle-libdb, enable for ELN
While the goal is to ship no BDB backend in RHEL 11, this currently cannot
be built without one.  As such, building with a bundled libdb and then
dropping the -bdb subpackage from ELN CRB gets us as close as possible to
that state for now.

https://github.com/389ds/389-ds-base/issues/6944
https://github.com/389ds/389-ds-base/pull/6945
2025-08-21 12:45:59 +02:00
38 changed files with 2242 additions and 28395 deletions


@@ -1,48 +0,0 @@
From a2d3ba3456f59b77443085d17b36b424437fbef1 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 11 Aug 2025 13:22:52 +0200
Subject: [PATCH] Issue 5120 - Fix compilation error
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Bug Description:
Compilation fails with `-Wunused-function`:
```
ldap/servers/slapd/main.c:290:1: warning: referral_set_defaults defined but not used [-Wunused-function]
290 | referral_set_defaults(void)
| ^~~~~~~~~~~~~~~~~~~~~
make: *** [Makefile:4148: all] Error 2
```
Fix Description:
Remove unused function `referral_set_defaults`.
Fixes: https://github.com/389ds/389-ds-base/issues/5120
---
ldap/servers/slapd/main.c | 8 --------
1 file changed, 8 deletions(-)
diff --git a/ldap/servers/slapd/main.c b/ldap/servers/slapd/main.c
index 9d81d80f3..c370588e5 100644
--- a/ldap/servers/slapd/main.c
+++ b/ldap/servers/slapd/main.c
@@ -285,14 +285,6 @@ main_setuid(char *username)
return 0;
}
-/* set good defaults for front-end config in referral mode */
-static void
-referral_set_defaults(void)
-{
- char errorbuf[SLAPI_DSE_RETURNTEXT_SIZE];
- config_set_maxdescriptors(CONFIG_MAXDESCRIPTORS_ATTRIBUTE, "1024", errorbuf, 1);
-}
-
static int
name2exemode(char *progname, char *s, int exit_if_unknown)
{
--
2.49.0


@@ -1,127 +0,0 @@
From dcc402a3dd9a8f316388dc31da42786fbc2c1a88 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Thu, 15 May 2025 10:35:27 -0400
Subject: [PATCH] Issue 6782 - Improve paged result locking
Description:
When cleaning a slot, instead of memsetting everything to zero and restoring
the mutex, manually reset all the values, leaving the mutex pointer intact.
There is also a deadlock possibility when checking for an abandoned paged
result search in opshared.c, and we were checking a flag value outside of the
per_conn lock.
Relates: https://github.com/389ds/389-ds-base/issues/6782
Reviewed by: progier & spichugi(Thanks!!)
---
ldap/servers/slapd/opshared.c | 10 +++++++++-
ldap/servers/slapd/pagedresults.c | 27 +++++++++++++++++----------
2 files changed, 26 insertions(+), 11 deletions(-)
diff --git a/ldap/servers/slapd/opshared.c b/ldap/servers/slapd/opshared.c
index 5ea919e2d..545518748 100644
--- a/ldap/servers/slapd/opshared.c
+++ b/ldap/servers/slapd/opshared.c
@@ -619,6 +619,14 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
int32_t tlimit;
slapi_pblock_get(pb, SLAPI_SEARCH_TIMELIMIT, &tlimit);
pagedresults_set_timelimit(pb_conn, operation, (time_t)tlimit, pr_idx);
+ /* When using this mutex in conjunction with the main paged
+ * result lock, you must do so in this order:
+ *
+ * --> pagedresults_lock()
+ * --> pagedresults_mutex
+ * <-- pagedresults_mutex
+ * <-- pagedresults_unlock()
+ */
pagedresults_mutex = pageresult_lock_get_addr(pb_conn);
}
@@ -744,11 +752,11 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
pr_search_result = pagedresults_get_search_result(pb_conn, operation, 1 /*locked*/, pr_idx);
if (pr_search_result) {
if (pagedresults_is_abandoned_or_notavailable(pb_conn, 1 /*locked*/, pr_idx)) {
+ pthread_mutex_unlock(pagedresults_mutex);
pagedresults_unlock(pb_conn, pr_idx);
/* Previous operation was abandoned and the simplepaged object is not in use. */
send_ldap_result(pb, 0, NULL, "Simple Paged Results Search abandoned", 0, NULL);
rc = LDAP_SUCCESS;
- pthread_mutex_unlock(pagedresults_mutex);
goto free_and_return;
} else {
slapi_pblock_set(pb, SLAPI_SEARCH_RESULT_SET, pr_search_result);
diff --git a/ldap/servers/slapd/pagedresults.c b/ldap/servers/slapd/pagedresults.c
index 642aefb3d..c3f3aae01 100644
--- a/ldap/servers/slapd/pagedresults.c
+++ b/ldap/servers/slapd/pagedresults.c
@@ -48,7 +48,6 @@ pageresult_lock_get_addr(Connection *conn)
static void
_pr_cleanup_one_slot(PagedResults *prp)
{
- PRLock *prmutex = NULL;
if (!prp) {
return;
}
@@ -56,13 +55,17 @@ _pr_cleanup_one_slot(PagedResults *prp)
/* sr is left; release it. */
prp->pr_current_be->be_search_results_release(&(prp->pr_search_result_set));
}
- /* clean up the slot */
- if (prp->pr_mutex) {
- /* pr_mutex is reused; back it up and reset it. */
- prmutex = prp->pr_mutex;
- }
- memset(prp, '\0', sizeof(PagedResults));
- prp->pr_mutex = prmutex;
+
+ /* clean up the slot except the mutex */
+ prp->pr_current_be = NULL;
+ prp->pr_search_result_set = NULL;
+ prp->pr_search_result_count = 0;
+ prp->pr_search_result_set_size_estimate = 0;
+ prp->pr_sort_result_code = 0;
+ prp->pr_timelimit_hr.tv_sec = 0;
+ prp->pr_timelimit_hr.tv_nsec = 0;
+ prp->pr_flags = 0;
+ prp->pr_msgid = 0;
}
/*
@@ -1007,7 +1010,8 @@ op_set_pagedresults(Operation *op)
/*
* pagedresults_lock/unlock -- introduced to protect search results for the
- * asynchronous searches.
+ * asynchronous searches. Do not call these functions while the PR conn lock
+ * is held (e.g. pageresult_lock_get_addr(conn))
*/
void
pagedresults_lock(Connection *conn, int index)
@@ -1045,6 +1049,8 @@ int
pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int index)
{
PagedResults *prp;
+ int32_t result;
+
if (!conn || (index < 0) || (index >= conn->c_pagedresults.prl_maxlen)) {
return 1; /* not abandoned, but do not want to proceed paged results op. */
}
@@ -1052,10 +1058,11 @@ pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int inde
pthread_mutex_lock(pageresult_lock_get_addr(conn));
}
prp = conn->c_pagedresults.prl_list + index;
+ result = prp->pr_flags & CONN_FLAG_PAGEDRESULTS_ABANDONED;
if (!locked) {
pthread_mutex_unlock(pageresult_lock_get_addr(conn));
}
- return prp->pr_flags & CONN_FLAG_PAGEDRESULTS_ABANDONED;
+ return result;
}
int
--
2.49.0
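
The ordering comment added to op_shared_search() is the core of this fix: the per-connection mutex must be taken after pagedresults_lock() and released before pagedresults_unlock(), and flag values must be read while the lock is held. A minimal Python sketch of both rules, with hypothetical lock names standing in for the C primitives (illustration only, not server code):

```
import threading

# Hypothetical stand-ins: the outer lock plays the role of pagedresults_lock(),
# the inner mutex the per-connection lock from pageresult_lock_get_addr().
pagedresults_lock = threading.Lock()
per_conn_mutex = threading.Lock()

pr_state = {"abandoned": False}

def is_abandoned_or_notavailable():
    """Copy shared state under the mutex, then return the copy.

    Returning pr_state["abandoned"] after dropping the mutex would be the
    unlocked read the patch removes from
    pagedresults_is_abandoned_or_notavailable().
    """
    with per_conn_mutex:
        result = pr_state["abandoned"]
    return result

def check_search_result():
    # Documented order: pagedresults_lock first, per-connection mutex second,
    # released in reverse (mutex first), as the comment added to
    # op_shared_search() prescribes.
    with pagedresults_lock:
        with per_conn_mutex:
            abandoned = pr_state["abandoned"]
        # per_conn_mutex is released here, before pagedresults_lock
    return abandoned
```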


@@ -1,33 +0,0 @@
From 8e341b4967212454f154cd08d7ceb2e2a429e2e8 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 11 Aug 2025 13:19:13 +0200
Subject: [PATCH] Issue 6929 - Compilation failure with rust-1.89 on Fedora ELN
Bug Description:
The `ValueArrayRefIter` struct has a lifetime parameter `'a`, but the return
type of the `iter` method doesn't specify it.
Fix Description:
Make the lifetime explicit.
Fixes: https://github.com/389ds/389-ds-base/issues/6929
---
src/slapi_r_plugin/src/value.rs | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/slapi_r_plugin/src/value.rs b/src/slapi_r_plugin/src/value.rs
index 2fd35c808..fec74ac25 100644
--- a/src/slapi_r_plugin/src/value.rs
+++ b/src/slapi_r_plugin/src/value.rs
@@ -61,7 +61,7 @@ impl ValueArrayRef {
ValueArrayRef { raw_slapi_val }
}
- pub fn iter(&self) -> ValueArrayRefIter {
+ pub fn iter(&self) -> ValueArrayRefIter<'_> {
ValueArrayRefIter {
idx: 0,
va_ref: &self,
--
2.49.0


@@ -0,0 +1,551 @@
From 045fe1a6899b7e4588be7101e81bb78995a713b1 Mon Sep 17 00:00:00 2001
From: Simon Pichugin <spichugi@redhat.com>
Date: Tue, 16 Dec 2025 15:48:35 -0800
Subject: [PATCH] Issue 7150 - Compressed access log rotations skipped,
accesslog-list out of sync (#7151)
Description: Accept `.gz`-suffixed rotated log filenames when
rebuilding rotation info and checking previous logs, preventing
compressed rotations from being dropped from the internal list.
Add regression tests to stress log rotation with compression,
verify `nsslapd-accesslog-list` stays in sync, and guard against
crashes when flushing buffered logs during rotation.
Minor doc fix in test.
Fixes: https://github.com/389ds/389-ds-base/issues/7150
Reviewed by: @progier389 (Thanks!)
---
.../suites/logging/log_flush_rotation_test.py | 341 +++++++++++++++++-
ldap/servers/slapd/log.c | 99 +++--
2 files changed, 402 insertions(+), 38 deletions(-)
diff --git a/dirsrvtests/tests/suites/logging/log_flush_rotation_test.py b/dirsrvtests/tests/suites/logging/log_flush_rotation_test.py
index b33a622e1..864ba9c5d 100644
--- a/dirsrvtests/tests/suites/logging/log_flush_rotation_test.py
+++ b/dirsrvtests/tests/suites/logging/log_flush_rotation_test.py
@@ -6,6 +6,7 @@
# See LICENSE for details.
# --- END COPYRIGHT BLOCK ---
#
+import glob
import os
import logging
import time
@@ -13,14 +14,351 @@ import pytest
from lib389._constants import DEFAULT_SUFFIX, PW_DM
from lib389.tasks import ImportTask
from lib389.idm.user import UserAccounts
+from lib389.idm.domain import Domain
+from lib389.idm.directorymanager import DirectoryManager
from lib389.topologies import topology_st as topo
log = logging.getLogger(__name__)
+def remove_rotated_access_logs(inst):
+ """
+ Remove all rotated access log files to start fresh for each test.
+ This prevents log files from previous tests affecting current test results.
+ """
+ log_dir = inst.get_log_dir()
+ patterns = [
+ f'{log_dir}/access.2*', # Uncompressed rotated logs
+ f'{log_dir}/access.*.gz', # Compressed rotated logs
+ ]
+ for pattern in patterns:
+ for log_file in glob.glob(pattern):
+ try:
+ os.remove(log_file)
+ log.info(f"Removed old log file: {log_file}")
+ except OSError as e:
+ log.warning(f"Could not remove {log_file}: {e}")
+
+
+def reset_access_log_config(inst):
+ """
+ Reset access log configuration to default values.
+ """
+ inst.config.set('nsslapd-accesslog-compress', 'off')
+ inst.config.set('nsslapd-accesslog-maxlogsize', '100')
+ inst.config.set('nsslapd-accesslog-maxlogsperdir', '10')
+ inst.config.set('nsslapd-accesslog-logrotationsync-enabled', 'off')
+ inst.config.set('nsslapd-accesslog-logbuffering', 'on')
+ inst.config.set('nsslapd-accesslog-logexpirationtime', '-1')
+ inst.config.set('nsslapd-accesslog-logminfreediskspace', '5')
+
+
+def generate_heavy_load(inst, suffix, iterations=50):
+ """
+ Generate heavy LDAP load to fill access log quickly.
+ Performs multiple operations: searches, modifies, binds to populate logs.
+ """
+ for i in range(iterations):
+ suffix.replace('description', f'iteration_{i}')
+ suffix.get_attr_val('description')
+
+
+def count_access_logs(log_dir, compressed_only=False):
+ """
+ Count access log files in the log directory.
+ Returns count of rotated access logs (not including the active 'access' file).
+ """
+ if compressed_only:
+ pattern = f'{log_dir}/access.*.gz'
+ else:
+ pattern = f'{log_dir}/access.2*'
+ log_files = glob.glob(pattern)
+ return len(log_files)
+
+
+def test_log_pileup_with_compression(topo):
+ """Test that log rotation properly deletes old logs when compression is enabled.
+
+ :id: fa1bfce8-b6d3-4520-a0a8-bead14fa5838
+ :setup: Standalone Instance
+ :steps:
+ 1. Clean up existing rotated logs and reset configuration
+ 2. Enable access log compression
+ 3. Set strict log limits (small maxlogsperdir)
+ 4. Disable log expiration to test count-based deletion
+ 5. Generate heavy load to create many log rotations
+ 6. Verify log count does not exceed maxlogsperdir limit
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. Success
+ 5. Success
+ 6. Log count should be at or below maxlogsperdir + small buffer
+ """
+
+ inst = topo.standalone
+ suffix = Domain(inst, DEFAULT_SUFFIX)
+ log_dir = inst.get_log_dir()
+
+ # Clean up before test
+ remove_rotated_access_logs(inst)
+ reset_access_log_config(inst)
+ inst.restart()
+
+ max_logs = 5
+ inst.config.set('nsslapd-accesslog-compress', 'on')
+ inst.config.set('nsslapd-accesslog-maxlogsperdir', str(max_logs))
+ inst.config.set('nsslapd-accesslog-maxlogsize', '1') # 1MB to trigger rotation
+ inst.config.set('nsslapd-accesslog-logrotationsync-enabled', 'off')
+ inst.config.set('nsslapd-accesslog-logbuffering', 'off')
+
+ inst.config.set('nsslapd-accesslog-logexpirationtime', '-1')
+
+ inst.config.set('nsslapd-accesslog-logminfreediskspace', '5')
+
+ inst.restart()
+ time.sleep(2)
+
+ target_logs = max_logs * 3
+ for i in range(target_logs):
+ log.info(f"Generating load for log rotation {i+1}/{target_logs}")
+ generate_heavy_load(inst, suffix, iterations=150)
+ time.sleep(1) # Wait for rotation
+
+ time.sleep(3)
+
+ logs_on_disk = count_access_logs(log_dir)
+ log.info(f"Configured maxlogsperdir: {max_logs}")
+ log.info(f"Actual rotated logs on disk: {logs_on_disk}")
+
+ all_access_logs = glob.glob(f'{log_dir}/access*')
+ log.info(f"All access log files: {all_access_logs}")
+
+ max_allowed = max_logs + 2
+ assert logs_on_disk <= max_allowed, (
+ f"Log rotation failed to delete old files! "
+ f"Expected at most {max_allowed} rotated logs (maxlogsperdir={max_logs} + 2 buffer), "
+ f"but found {logs_on_disk}. The server has lost track of the file list."
+ )
+
+
+@pytest.mark.parametrize("compress_enabled", ["on", "off"])
+def test_accesslog_list_mismatch(topo, compress_enabled):
+ """Test that nsslapd-accesslog-list stays synchronized with actual log files.
+
+ :id: 0a8a46a6-cae7-43bd-8b64-5e3481480cd3
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Clean up existing rotated logs and reset configuration
+ 2. Configure log rotation with compression enabled/disabled
+ 3. Generate activity to trigger multiple rotations
+ 4. Get the nsslapd-accesslog-list attribute
+ 5. Compare with actual files on disk
+ 6. Verify they match (accounting for .gz extension when enabled)
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. Success
+ 5. Success
+ 6. The list attribute should match actual files on disk
+ """
+
+ inst = topo.standalone
+ suffix = Domain(inst, DEFAULT_SUFFIX)
+ log_dir = inst.get_log_dir()
+ compression_on = compress_enabled == "on"
+
+ # Clean up before test
+ remove_rotated_access_logs(inst)
+ reset_access_log_config(inst)
+ inst.restart()
+
+ inst.config.set('nsslapd-accesslog-compress', compress_enabled)
+ inst.config.set('nsslapd-accesslog-maxlogsize', '1')
+ inst.config.set('nsslapd-accesslog-maxlogsperdir', '10')
+ inst.config.set('nsslapd-accesslog-logrotationsync-enabled', 'off')
+ inst.config.set('nsslapd-accesslog-logbuffering', 'off')
+ inst.config.set('nsslapd-accesslog-logexpirationtime', '-1')
+
+ inst.restart()
+ time.sleep(2)
+
+ for i in range(15):
+ suffix_note = "(no compression)" if not compression_on else ""
+ log.info(f"Generating load for rotation {i+1}/15 {suffix_note}")
+ generate_heavy_load(inst, suffix, iterations=150)
+ time.sleep(1)
+
+ time.sleep(3)
+
+ accesslog_list = inst.config.get_attr_vals_utf8('nsslapd-accesslog-list')
+ log.info(f"nsslapd-accesslog-list entries (compress={compress_enabled}): {len(accesslog_list)}")
+ log.info(f"nsslapd-accesslog-list (compress={compress_enabled}): {accesslog_list}")
+
+ disk_files = glob.glob(f'{log_dir}/access.2*')
+ log.info(f"Actual files on disk (compress={compress_enabled}): {len(disk_files)}")
+ log.info(f"Disk files (compress={compress_enabled}): {disk_files}")
+
+ disk_files_for_compare = set()
+ for fpath in disk_files:
+ if compression_on and fpath.endswith('.gz'):
+ disk_files_for_compare.add(fpath[:-3])
+ else:
+ disk_files_for_compare.add(fpath)
+
+ list_files_set = set(accesslog_list)
+ missing_from_disk = list_files_set - disk_files_for_compare
+ extra_on_disk = disk_files_for_compare - list_files_set
+
+ if missing_from_disk:
+ log.error(
+ f"[compress={compress_enabled}] Files in list but NOT on disk: {missing_from_disk}"
+ )
+ if extra_on_disk:
+ log.warning(
+ f"[compress={compress_enabled}] Files on disk but NOT in list: {extra_on_disk}"
+ )
+
+ assert not missing_from_disk, (
+ f"nsslapd-accesslog-list mismatch (compress={compress_enabled})! "
+ f"Files listed but missing from disk: {missing_from_disk}. "
+ f"This indicates the server's internal list is out of sync with actual files."
+ )
+
+ if len(extra_on_disk) > 2:
+ log.warning(
+ f"Potential log tracking issue (compress={compress_enabled}): "
+ f"{len(extra_on_disk)} files on disk are not tracked in the accesslog-list: "
+ f"{extra_on_disk}"
+ )
+
+
+def test_accesslog_list_mixed_compression(topo):
+ """Test that nsslapd-accesslog-list correctly tracks both compressed and uncompressed logs.
+
+ :id: 11b088cd-23be-407d-ad16-4ce2e12da09e
+ :setup: Standalone Instance
+ :steps:
+ 1. Clean up existing rotated logs and reset configuration
+ 2. Create rotated logs with compression OFF
+ 3. Enable compression and create more rotated logs
+ 4. Get the nsslapd-accesslog-list attribute
+ 5. Compare with actual files on disk
+ 6. Verify all files are correctly tracked (uncompressed and compressed)
+ :expectedresults:
+ 1. Success
+ 2. Success - uncompressed rotated logs created
+ 3. Success - compressed rotated logs created
+ 4. Success
+ 5. Success
+ 6. The list should contain base filenames (without .gz) that
+ correspond to files on disk (either as-is or with .gz suffix)
+ """
+
+ inst = topo.standalone
+ suffix = Domain(inst, DEFAULT_SUFFIX)
+ log_dir = inst.get_log_dir()
+
+ # Clean up before test
+ remove_rotated_access_logs(inst)
+ reset_access_log_config(inst)
+ inst.restart()
+
+ inst.config.set('nsslapd-accesslog-compress', 'off')
+ inst.config.set('nsslapd-accesslog-maxlogsize', '1')
+ inst.config.set('nsslapd-accesslog-maxlogsperdir', '20')
+ inst.config.set('nsslapd-accesslog-logrotationsync-enabled', 'off')
+ inst.config.set('nsslapd-accesslog-logbuffering', 'off')
+ inst.config.set('nsslapd-accesslog-logexpirationtime', '-1')
+
+ inst.restart()
+ time.sleep(2)
+
+ for i in range(15):
+ log.info(f"Generating load for uncompressed rotation {i+1}/15")
+ generate_heavy_load(inst, suffix, iterations=150)
+ time.sleep(1)
+
+ time.sleep(2)
+
+ # Check what we have so far
+ uncompressed_files = glob.glob(f'{log_dir}/access.2*')
+ log.info(f"Files on disk after uncompressed phase: {uncompressed_files}")
+
+ inst.config.set('nsslapd-accesslog-compress', 'on')
+ inst.restart()
+ time.sleep(2)
+
+ for i in range(15):
+ log.info(f"Generating load for compressed rotation {i+1}/15")
+ generate_heavy_load(inst, suffix, iterations=150)
+ time.sleep(1)
+
+ time.sleep(3)
+
+ accesslog_list = inst.config.get_attr_vals_utf8('nsslapd-accesslog-list')
+
+ disk_files = glob.glob(f'{log_dir}/access.2*')
+
+ log.info(f"nsslapd-accesslog-list entries: {len(accesslog_list)}")
+ log.info(f"nsslapd-accesslog-list: {sorted(accesslog_list)}")
+ log.info(f"Actual files on disk: {len(disk_files)}")
+ log.info(f"Disk files: {sorted(disk_files)}")
+
+ compressed_on_disk = [f for f in disk_files if f.endswith('.gz')]
+ uncompressed_on_disk = [f for f in disk_files if not f.endswith('.gz')]
+ log.info(f"Compressed files on disk: {compressed_on_disk}")
+ log.info(f"Uncompressed files on disk: {uncompressed_on_disk}")
+
+ list_files_set = set(accesslog_list)
+
+ disk_files_base = set()
+ for fpath in disk_files:
+ if fpath.endswith('.gz'):
+ disk_files_base.add(fpath[:-3]) # Strip .gz
+ else:
+ disk_files_base.add(fpath)
+
+ missing_from_disk = list_files_set - disk_files_base
+
+ extra_on_disk = disk_files_base - list_files_set
+
+ if missing_from_disk:
+ log.error(f"Files in list but NOT on disk: {missing_from_disk}")
+ if extra_on_disk:
+ log.warning(f"Files on disk but NOT in list: {extra_on_disk}")
+
+ assert not missing_from_disk, (
+ f"nsslapd-accesslog-list contains stale entries! "
+ f"Files in list but not on disk (as base or .gz): {missing_from_disk}"
+ )
+
+ for list_file in accesslog_list:
+ exists_uncompressed = os.path.exists(list_file)
+ exists_compressed = os.path.exists(list_file + '.gz')
+ assert exists_uncompressed or exists_compressed, (
+ f"File in accesslog-list does not exist on disk: {list_file} "
+ f"(checked both {list_file} and {list_file}.gz)"
+ )
+ if exists_compressed and not exists_uncompressed:
+ log.info(f" {list_file} -> exists as .gz (compressed)")
+ elif exists_uncompressed:
+ log.info(f" {list_file} -> exists (uncompressed)")
+
+ if len(extra_on_disk) > 1:
+ log.warning(
+ f"Some files on disk are not tracked in accesslog-list: {extra_on_disk}"
+ )
+
+ log.info("Mixed compression test completed successfully")
+
+
def test_log_flush_and_rotation_crash(topo):
- """Make sure server does not crash whening flushing a buffer and rotating
+ """Make sure server does not crash when flushing a buffer and rotating
the log at the same time
:id: d4b0af2f-48b2-45f5-ae8b-f06f692c3133
@@ -36,6 +374,7 @@ def test_log_flush_and_rotation_crash(topo):
3. Success
4. Success
"""
+ # NOTE: This test is placed last as it may affect the suffix state.
inst = topo.standalone
diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c
index 27bb4bc15..ea744ac1e 100644
--- a/ldap/servers/slapd/log.c
+++ b/ldap/servers/slapd/log.c
@@ -137,6 +137,7 @@ static void vslapd_log_emergency_error(LOGFD fp, const char *msg, int locked);
static int get_syslog_loglevel(int loglevel);
static void log_external_libs_debug_openldap_print(char *buffer);
static int log__fix_rotationinfof(char *pathname);
+static int log__validate_rotated_logname(const char *timestamp_str, PRBool *is_compressed);
static int
get_syslog_loglevel(int loglevel)
@@ -375,7 +376,7 @@ g_log_init()
loginfo.log_security_fdes = NULL;
loginfo.log_security_file = NULL;
loginfo.log_securityinfo_file = NULL;
- loginfo.log_numof_access_logs = 1;
+ loginfo.log_numof_security_logs = 1;
loginfo.log_security_logchain = NULL;
loginfo.log_security_buffer = log_create_buffer(LOG_BUFFER_MAXSIZE);
loginfo.log_security_compress = cfg->securitylog_compress;
@@ -3422,7 +3423,7 @@ log__open_accesslogfile(int logfile_state, int locked)
}
} else if (loginfo.log_access_compress) {
if (compress_log_file(newfile, loginfo.log_access_mode) != 0) {
- slapi_log_err(SLAPI_LOG_ERR, "log__open_auditfaillogfile",
+ slapi_log_err(SLAPI_LOG_ERR, "log__open_accesslogfile",
"failed to compress rotated access log (%s)\n",
newfile);
} else {
@@ -4825,6 +4826,50 @@ log__delete_rotated_logs()
loginfo.log_error_logchain = NULL;
}
+/*
+ * log__validate_rotated_logname
+ *
+ * Validates that a log filename timestamp suffix matches the expected format:
+ * YYYYMMDD-HHMMSS (15 chars) or YYYYMMDD-HHMMSS.gz (18 chars) for compressed files.
+ * Uses regex pattern: ^[0-9]{8}-[0-9]{6}(\.gz)?$
+ *
+ * \param timestamp_str The timestamp portion of the log filename (after the first '.')
+ * \param is_compressed Output parameter set to PR_TRUE if the file has .gz suffix
+ * \return 1 if valid, 0 if invalid
+ */
+static int
+log__validate_rotated_logname(const char *timestamp_str, PRBool *is_compressed)
+{
+ Slapi_Regex *re = NULL;
+ char *re_error = NULL;
+ int rc = 0;
+
+ /* Match YYYYMMDD-HHMMSS with optional .gz suffix */
+ static const char *pattern = "^[0-9]{8}-[0-9]{6}(\\.gz)?$";
+
+ *is_compressed = PR_FALSE;
+
+ re = slapi_re_comp(pattern, &re_error);
+ if (re == NULL) {
+ slapi_log_err(SLAPI_LOG_ERR, "log__validate_rotated_logname",
+ "Failed to compile regex: %s\n", re_error ? re_error : "unknown error");
+ slapi_ch_free_string(&re_error);
+ return 0;
+ }
+
+ rc = slapi_re_exec_nt(re, timestamp_str);
+ if (rc == 1) {
+ /* Check if compressed by looking for .gz suffix */
+ size_t len = strlen(timestamp_str);
+ if (len >= 3 && strcmp(timestamp_str + len - 3, ".gz") == 0) {
+ *is_compressed = PR_TRUE;
+ }
+ }
+
+ slapi_re_free(re);
+ return rc == 1 ? 1 : 0;
+}
+
#define ERRORSLOG 1
#define ACCESSLOG 2
#define AUDITLOG 3
@@ -4907,31 +4952,19 @@ log__fix_rotationinfof(char *pathname)
}
} else if (0 == strncmp(log_type, dirent->name, strlen(log_type)) &&
(p = strchr(dirent->name, '.')) != NULL &&
- NULL != strchr(p, '-')) /* e.g., errors.20051123-165135 */
+ NULL != strchr(p, '-')) /* e.g., errors.20051123-165135 or errors.20051123-165135.gz */
{
struct logfileinfo *logp;
- char *q;
- int ignoreit = 0;
-
- for (q = ++p; q && *q; q++) {
- if (*q != '-' &&
- *q != '.' && /* .gz */
- *q != 'g' &&
- *q != 'z' &&
- !isdigit(*q))
- {
- ignoreit = 1;
- }
- }
- if (ignoreit || (q - p != 15)) {
+ PRBool is_compressed = PR_FALSE;
+
+ /* Skip the '.' to get the timestamp portion */
+ p++;
+ if (!log__validate_rotated_logname(p, &is_compressed)) {
continue;
}
logp = (struct logfileinfo *)slapi_ch_malloc(sizeof(struct logfileinfo));
logp->l_ctime = log_reverse_convert_time(p);
- logp->l_compressed = PR_FALSE;
- if (strcmp(p + strlen(p) - 3, ".gz") == 0) {
- logp->l_compressed = PR_TRUE;
- }
+ logp->l_compressed = is_compressed;
PR_snprintf(rotated_log, rotated_log_len, "%s/%s",
logsdir, dirent->name);
@@ -5098,23 +5131,15 @@ log__check_prevlogs(FILE *fp, char *pathname)
for (dirent = PR_ReadDir(dirptr, dirflags); dirent;
dirent = PR_ReadDir(dirptr, dirflags)) {
if (0 == strncmp(log_type, dirent->name, strlen(log_type)) &&
- (p = strrchr(dirent->name, '.')) != NULL &&
- NULL != strchr(p, '-')) { /* e.g., errors.20051123-165135 */
- char *q;
- int ignoreit = 0;
-
- for (q = ++p; q && *q; q++) {
- if (*q != '-' &&
- *q != '.' && /* .gz */
- *q != 'g' &&
- *q != 'z' &&
- !isdigit(*q))
- {
- ignoreit = 1;
- }
- }
- if (ignoreit || (q - p != 15))
+ (p = strchr(dirent->name, '.')) != NULL &&
+ NULL != strchr(p, '-')) { /* e.g., errors.20051123-165135 or errors.20051123-165135.gz */
+ PRBool is_compressed = PR_FALSE;
+
+ /* Skip the '.' to get the timestamp portion */
+ p++;
+ if (!log__validate_rotated_logname(p, &is_compressed)) {
continue;
+ }
fseek(fp, 0, SEEK_SET);
buf[BUFSIZ - 1] = '\0';
--
2.52.0
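
Outside the server, the new filename check is easy to reproduce. A minimal Python sketch using the same regex as log__validate_rotated_logname() (the helper name and tuple return here are illustrative, not server or lib389 API):

```
import re

# Same pattern as log__validate_rotated_logname() in log.c:
# YYYYMMDD-HHMMSS, optionally followed by .gz for compressed rotations.
ROTATED_TS = re.compile(r'^[0-9]{8}-[0-9]{6}(\.gz)?$')

def validate_rotated_logname(timestamp_str):
    """Return (is_valid, is_compressed) for the part after the first '.'."""
    if not ROTATED_TS.match(timestamp_str):
        return (False, False)
    return (True, timestamp_str.endswith('.gz'))

assert validate_rotated_logname('20051123-165135') == (True, False)
# 18 chars: the old 15-character length check dropped names like this, which
# is how compressed rotations vanished from the internal list.
assert validate_rotated_logname('20051123-165135.gz') == (True, True)
# 15 chars built only from digits/'-'/'g'/'z': the old character walk would
# have accepted it; the regex rejects it.
assert validate_rotated_logname('20051123-1651gz') == (False, False)
```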


@@ -1,488 +0,0 @@
From 388d5ef9b64208db26373fc3b1b296a82ea689ba Mon Sep 17 00:00:00 2001
From: Simon Pichugin <spichugi@redhat.com>
Date: Fri, 27 Jun 2025 18:43:39 -0700
Subject: [PATCH] Issue 6822 - Backend creation cleanup and Database UI tab
error handling (#6823)
Description: Add rollback functionality when mapping tree creation fails
during backend creation to prevent orphaned backends.
Improve error handling in Database, Replication and Monitoring UI tabs
to gracefully handle backend get-tree command failures.
Fixes: https://github.com/389ds/389-ds-base/issues/6822
Reviewed by: @mreynolds389 (Thanks!)
---
src/cockpit/389-console/src/database.jsx | 119 ++++++++------
src/cockpit/389-console/src/monitor.jsx | 172 +++++++++++---------
src/cockpit/389-console/src/replication.jsx | 55 ++++---
src/lib389/lib389/backend.py | 18 +-
4 files changed, 210 insertions(+), 154 deletions(-)
diff --git a/src/cockpit/389-console/src/database.jsx b/src/cockpit/389-console/src/database.jsx
index c0c4be414..276125dfc 100644
--- a/src/cockpit/389-console/src/database.jsx
+++ b/src/cockpit/389-console/src/database.jsx
@@ -478,6 +478,59 @@ export class Database extends React.Component {
}
loadSuffixTree(fullReset) {
+ const treeData = [
+ {
+ name: _("Global Database Configuration"),
+ icon: <CogIcon />,
+ id: "dbconfig",
+ },
+ {
+ name: _("Chaining Configuration"),
+ icon: <ExternalLinkAltIcon />,
+ id: "chaining-config",
+ },
+ {
+ name: _("Backups & LDIFs"),
+ icon: <CopyIcon />,
+ id: "backups",
+ },
+ {
+ name: _("Password Policies"),
+ id: "pwp",
+ icon: <KeyIcon />,
+ children: [
+ {
+ name: _("Global Policy"),
+ icon: <HomeIcon />,
+ id: "pwpolicy",
+ },
+ {
+ name: _("Local Policies"),
+ icon: <UsersIcon />,
+ id: "localpwpolicy",
+ },
+ ],
+ defaultExpanded: true
+ },
+ {
+ name: _("Suffixes"),
+ icon: <CatalogIcon />,
+ id: "suffixes-tree",
+ children: [],
+ defaultExpanded: true,
+ action: (
+ <Button
+ onClick={this.handleShowSuffixModal}
+ variant="plain"
+ aria-label="Create new suffix"
+ title={_("Create new suffix")}
+ >
+ <PlusIcon />
+ </Button>
+ ),
+ }
+ ];
+
const cmd = [
"dsconf", "-j", "ldapi://%2fvar%2frun%2fslapd-" + this.props.serverId + ".socket",
"backend", "get-tree",
@@ -491,58 +544,20 @@ export class Database extends React.Component {
suffixData = JSON.parse(content);
this.processTree(suffixData);
}
- const treeData = [
- {
- name: _("Global Database Configuration"),
- icon: <CogIcon />,
- id: "dbconfig",
- },
- {
- name: _("Chaining Configuration"),
- icon: <ExternalLinkAltIcon />,
- id: "chaining-config",
- },
- {
- name: _("Backups & LDIFs"),
- icon: <CopyIcon />,
- id: "backups",
- },
- {
- name: _("Password Policies"),
- id: "pwp",
- icon: <KeyIcon />,
- children: [
- {
- name: _("Global Policy"),
- icon: <HomeIcon />,
- id: "pwpolicy",
- },
- {
- name: _("Local Policies"),
- icon: <UsersIcon />,
- id: "localpwpolicy",
- },
- ],
- defaultExpanded: true
- },
- {
- name: _("Suffixes"),
- icon: <CatalogIcon />,
- id: "suffixes-tree",
- children: suffixData,
- defaultExpanded: true,
- action: (
- <Button
- onClick={this.handleShowSuffixModal}
- variant="plain"
- aria-label="Create new suffix"
- title={_("Create new suffix")}
- >
- <PlusIcon />
- </Button>
- ),
- }
- ];
+
+ let current_node = this.state.node_name;
+ if (fullReset) {
+ current_node = DB_CONFIG;
+ }
+
+ treeData[4].children = suffixData; // suffixes node
+ this.setState(() => ({
+ nodes: treeData,
+ node_name: current_node,
+ }), this.loadAttrs);
+ })
+ .fail(err => {
+ // Handle backend get-tree failure gracefully
let current_node = this.state.node_name;
if (fullReset) {
current_node = DB_CONFIG;
diff --git a/src/cockpit/389-console/src/monitor.jsx b/src/cockpit/389-console/src/monitor.jsx
index ad48d1f87..91a8e3e37 100644
--- a/src/cockpit/389-console/src/monitor.jsx
+++ b/src/cockpit/389-console/src/monitor.jsx
@@ -200,6 +200,84 @@ export class Monitor extends React.Component {
}
loadSuffixTree(fullReset) {
+ const basicData = [
+ {
+ name: _("Server Statistics"),
+ icon: <ClusterIcon />,
+ id: "server-monitor",
+ type: "server",
+ },
+ {
+ name: _("Replication"),
+ icon: <TopologyIcon />,
+ id: "replication-monitor",
+ type: "replication",
+ defaultExpanded: true,
+ children: [
+ {
+ name: _("Synchronization Report"),
+ icon: <MonitoringIcon />,
+ id: "sync-report",
+ item: "sync-report",
+ type: "repl-mon",
+ },
+ {
+ name: _("Log Analysis"),
+ icon: <MonitoringIcon />,
+ id: "log-analysis",
+ item: "log-analysis",
+ type: "repl-mon",
+ }
+ ],
+ },
+ {
+ name: _("Database"),
+ icon: <DatabaseIcon />,
+ id: "database-monitor",
+ type: "database",
+ children: [], // Will be populated with treeData on success
+ defaultExpanded: true,
+ },
+ {
+ name: _("Logging"),
+ icon: <CatalogIcon />,
+ id: "log-monitor",
+ defaultExpanded: true,
+ children: [
+ {
+ name: _("Access Log"),
+ icon: <BookIcon size="sm" />,
+ id: "access-log-monitor",
+ type: "log",
+ },
+ {
+ name: _("Audit Log"),
+ icon: <BookIcon size="sm" />,
+ id: "audit-log-monitor",
+ type: "log",
+ },
+ {
+ name: _("Audit Failure Log"),
+ icon: <BookIcon size="sm" />,
+ id: "auditfail-log-monitor",
+ type: "log",
+ },
+ {
+ name: _("Errors Log"),
+ icon: <BookIcon size="sm" />,
+ id: "error-log-monitor",
+ type: "log",
+ },
+ {
+ name: _("Security Log"),
+ icon: <BookIcon size="sm" />,
+ id: "security-log-monitor",
+ type: "log",
+ },
+ ]
+ },
+ ];
+
const cmd = [
"dsconf", "-j", "ldapi://%2fvar%2frun%2fslapd-" + this.props.serverId + ".socket",
"backend", "get-tree",
@@ -210,83 +288,7 @@ export class Monitor extends React.Component {
.done(content => {
const treeData = JSON.parse(content);
this.processTree(treeData);
- const basicData = [
- {
- name: _("Server Statistics"),
- icon: <ClusterIcon />,
- id: "server-monitor",
- type: "server",
- },
- {
- name: _("Replication"),
- icon: <TopologyIcon />,
- id: "replication-monitor",
- type: "replication",
- defaultExpanded: true,
- children: [
- {
- name: _("Synchronization Report"),
- icon: <MonitoringIcon />,
- id: "sync-report",
- item: "sync-report",
- type: "repl-mon",
- },
- {
- name: _("Log Analysis"),
- icon: <MonitoringIcon />,
- id: "log-analysis",
- item: "log-analysis",
- type: "repl-mon",
- }
- ],
- },
- {
- name: _("Database"),
- icon: <DatabaseIcon />,
- id: "database-monitor",
- type: "database",
- children: [],
- defaultExpanded: true,
- },
- {
- name: _("Logging"),
- icon: <CatalogIcon />,
- id: "log-monitor",
- defaultExpanded: true,
- children: [
- {
- name: _("Access Log"),
- icon: <BookIcon size="sm" />,
- id: "access-log-monitor",
- type: "log",
- },
- {
- name: _("Audit Log"),
- icon: <BookIcon size="sm" />,
- id: "audit-log-monitor",
- type: "log",
- },
- {
- name: _("Audit Failure Log"),
- icon: <BookIcon size="sm" />,
- id: "auditfail-log-monitor",
- type: "log",
- },
- {
- name: _("Errors Log"),
- icon: <BookIcon size="sm" />,
- id: "error-log-monitor",
- type: "log",
- },
- {
- name: _("Security Log"),
- icon: <BookIcon size="sm" />,
- id: "security-log-monitor",
- type: "log",
- },
- ]
- },
- ];
+
let current_node = this.state.node_name;
let type = this.state.node_type;
if (fullReset) {
@@ -296,6 +298,22 @@ export class Monitor extends React.Component {
basicData[2].children = treeData; // database node
this.processReplSuffixes(basicData[1].children);
+ this.setState(() => ({
+ nodes: basicData,
+ node_name: current_node,
+ node_type: type,
+ }), this.update_tree_nodes);
+ })
+ .fail(err => {
+ // Handle backend get-tree failure gracefully
+ let current_node = this.state.node_name;
+ let type = this.state.node_type;
+ if (fullReset) {
+ current_node = "server-monitor";
+ type = "server";
+ }
+ this.processReplSuffixes(basicData[1].children);
+
this.setState(() => ({
nodes: basicData,
node_name: current_node,
diff --git a/src/cockpit/389-console/src/replication.jsx b/src/cockpit/389-console/src/replication.jsx
index fa492fd2a..aa535bfc7 100644
--- a/src/cockpit/389-console/src/replication.jsx
+++ b/src/cockpit/389-console/src/replication.jsx
@@ -177,6 +177,16 @@ export class Replication extends React.Component {
loaded: false
});
+ const basicData = [
+ {
+ name: _("Suffixes"),
+ icon: <TopologyIcon />,
+ id: "repl-suffixes",
+ children: [],
+ defaultExpanded: true
+ }
+ ];
+
const cmd = [
"dsconf", "-j", "ldapi://%2fvar%2frun%2fslapd-" + this.props.serverId + ".socket",
"backend", "get-tree",
@@ -199,15 +209,7 @@ export class Replication extends React.Component {
}
}
}
- const basicData = [
- {
- name: _("Suffixes"),
- icon: <TopologyIcon />,
- id: "repl-suffixes",
- children: [],
- defaultExpanded: true
- }
- ];
+
let current_node = this.state.node_name;
let current_type = this.state.node_type;
let replicated = this.state.node_replicated;
@@ -258,6 +260,19 @@ export class Replication extends React.Component {
}
basicData[0].children = treeData;
+ this.setState({
+ nodes: basicData,
+ node_name: current_node,
+ node_type: current_type,
+ node_replicated: replicated,
+ }, () => { this.update_tree_nodes() });
+ })
+ .fail(err => {
+ // Handle backend get-tree failure gracefully
+ let current_node = this.state.node_name;
+ let current_type = this.state.node_type;
+ let replicated = this.state.node_replicated;
+
this.setState({
nodes: basicData,
node_name: current_node,
@@ -905,18 +920,18 @@ export class Replication extends React.Component {
disableTree: false
});
});
- })
- .fail(err => {
- const errMsg = JSON.parse(err);
- this.props.addNotification(
- "error",
- cockpit.format(_("Error loading replication agreements configuration - $0"), errMsg.desc)
- );
- this.setState({
- suffixLoading: false,
- disableTree: false
+ })
+ .fail(err => {
+ const errMsg = JSON.parse(err);
+ this.props.addNotification(
+ "error",
+ cockpit.format(_("Error loading replication agreements configuration - $0"), errMsg.desc)
+ );
+ this.setState({
+ suffixLoading: false,
+ disableTree: false
+ });
});
- });
})
.fail(err => {
// changelog failure
diff --git a/src/lib389/lib389/backend.py b/src/lib389/lib389/backend.py
index 1d000ed66..53f15b6b0 100644
--- a/src/lib389/lib389/backend.py
+++ b/src/lib389/lib389/backend.py
@@ -694,24 +694,32 @@ class Backend(DSLdapObject):
parent_suffix = properties.pop('parent', False)
# Okay, now try to make the backend.
- super(Backend, self).create(dn, properties, basedn)
+ backend_obj = super(Backend, self).create(dn, properties, basedn)
# We check if the mapping tree exists in create, so do this *after*
if create_mapping_tree is True:
- properties = {
+ mapping_tree_properties = {
'cn': self._nprops_stash['nsslapd-suffix'],
'nsslapd-state': 'backend',
'nsslapd-backend': self._nprops_stash['cn'],
}
if parent_suffix:
# This is a subsuffix, set the parent suffix
- properties['nsslapd-parent-suffix'] = parent_suffix
- self._mts.create(properties=properties)
+ mapping_tree_properties['nsslapd-parent-suffix'] = parent_suffix
+
+ try:
+ self._mts.create(properties=mapping_tree_properties)
+ except Exception as e:
+ try:
+ backend_obj.delete()
+ except Exception as cleanup_error:
+ self._instance.log.error(f"Failed to cleanup backend after mapping tree creation failure: {cleanup_error}")
+ raise e
# We can't create the sample entries unless a mapping tree was installed.
if sample_entries is not False and create_mapping_tree is True:
self.create_sample_entries(sample_entries)
- return self
+ return backend_obj
def delete(self):
"""Deletes the backend, it's mapping tree and all related indices.
--
2.49.0
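
The backend.py change follows a generic create-then-roll-back shape: if the dependent mapping tree cannot be created, delete the just-created backend so nothing is orphaned, and never let a cleanup failure mask the original error. A sketch of the pattern with hypothetical callables (not the lib389 API):

```
import logging

log = logging.getLogger(__name__)

def create_with_rollback(create_backend, create_mapping_tree):
    """Create two dependent objects; undo the first if the second fails."""
    backend = create_backend()
    try:
        create_mapping_tree()
    except Exception as e:
        try:
            backend.delete()  # roll back to avoid an orphaned backend
        except Exception as cleanup_error:
            # Log, but re-raise the original failure, not the cleanup one.
            log.error("Failed to clean up backend after mapping tree "
                      "creation failure: %s", cleanup_error)
        raise e
    return backend
```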


@@ -0,0 +1,37 @@
From 3b133a5ed6fa89939a569fe6130b325726f4e50c Mon Sep 17 00:00:00 2001
From: Stanislav Levin <slev@altlinux.org>
Date: Fri, 19 Dec 2025 14:52:48 +0300
Subject: [PATCH] Sync lib389 version to 3.1.4 (#7161)
Prepared with:
$ python3 validate_version.py --update
ERROR: Version mismatch detected!
Main project version: 3.1.4
lib389 version: 3.1.3
SUCCESS: Updated lib389 version to 3.1.4 in pyproject.toml
Fixes: https://github.com/389ds/389-ds-base/issues/7160
Reviewed by: @progier (Thanks!)
Signed-off-by: Stanislav Levin <slev@altlinux.org>
---
src/lib389/pyproject.toml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/lib389/pyproject.toml b/src/lib389/pyproject.toml
index 1cd840713..85c0c5141 100644
--- a/src/lib389/pyproject.toml
+++ b/src/lib389/pyproject.toml
@@ -16,7 +16,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "lib389"
-version = "3.1.3" # Should match the main 389-ds-base version
+version = "3.1.4" # Should match the main 389-ds-base version
description = "A library for accessing, testing, and configuring the 389 Directory Server"
readme = "README.md"
license = {text = "GPL-3.0-or-later"}
--
2.52.0
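
The validate_version.py output quoted above shows the idea: read both version declarations and fail when they differ. A rough sketch of such a check (reading the main version from VERSION.sh is an assumption; only the pyproject.toml location comes from the patch):

```
import re
import sys
import tomllib  # Python 3.11+

def main_project_version(path="VERSION.sh"):
    # Assumption: the main version appears as X.Y.Z somewhere in this file.
    m = re.search(r'\b(\d+\.\d+\.\d+)\b', open(path).read())
    return m.group(1) if m else None

def lib389_version(path="src/lib389/pyproject.toml"):
    with open(path, "rb") as f:
        return tomllib.load(f)["project"]["version"]

if __name__ == "__main__":
    main_ver, lib_ver = main_project_version(), lib389_version()
    if main_ver != lib_ver:
        print("ERROR: Version mismatch detected!")
        print(f"Main project version: {main_ver}")
        print(f"lib389 version: {lib_ver}")
        sys.exit(1)
```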


@@ -1,515 +0,0 @@
From 9da3349d53f4073740ddb1aca97713e13cb40cd0 Mon Sep 17 00:00:00 2001
From: Lenka Doudova <lryznaro@redhat.com>
Date: Mon, 9 Jun 2025 15:15:04 +0200
Subject: [PATCH] Issue 6753 - Add 'add_exclude_subtree' and
'remove_exclude_subtree' methods to Attribute uniqueness plugin
Description:
Adding 'add_exclude_subtree' and 'remove_exclude_subtree' methods to AttributeUniquenessPlugin in
order to be able to easily add or remove an exclude subtree.
Porting ticket 47927 test to
dirsrvtests/tests/suites/plugins/attruniq_test.py
Relates: #6753
Author: Lenka Doudova
Reviewers: Simon Pichugin, Mark Reynolds
---
.../tests/suites/plugins/attruniq_test.py | 171 +++++++++++
dirsrvtests/tests/tickets/ticket47927_test.py | 267 ------------------
src/lib389/lib389/plugins.py | 10 +
3 files changed, 181 insertions(+), 267 deletions(-)
delete mode 100644 dirsrvtests/tests/tickets/ticket47927_test.py
diff --git a/dirsrvtests/tests/suites/plugins/attruniq_test.py b/dirsrvtests/tests/suites/plugins/attruniq_test.py
index c1ccad9ae..aac659c29 100644
--- a/dirsrvtests/tests/suites/plugins/attruniq_test.py
+++ b/dirsrvtests/tests/suites/plugins/attruniq_test.py
@@ -10,6 +10,7 @@ import pytest
import ldap
import logging
from lib389.plugins import AttributeUniquenessPlugin
+from lib389.idm.nscontainer import nsContainers
from lib389.idm.user import UserAccounts
from lib389.idm.group import Groups
from lib389._constants import DEFAULT_SUFFIX
@@ -22,6 +23,19 @@ log = logging.getLogger(__name__)
MAIL_ATTR_VALUE = 'non-uniq@value.net'
MAIL_ATTR_VALUE_ALT = 'alt-mail@value.net'
+EXCLUDED_CONTAINER_CN = "excluded_container"
+EXCLUDED_CONTAINER_DN = "cn={},{}".format(EXCLUDED_CONTAINER_CN, DEFAULT_SUFFIX)
+
+EXCLUDED_BIS_CONTAINER_CN = "excluded_bis_container"
+EXCLUDED_BIS_CONTAINER_DN = "cn={},{}".format(EXCLUDED_BIS_CONTAINER_CN, DEFAULT_SUFFIX)
+
+ENFORCED_CONTAINER_CN = "enforced_container"
+
+USER_1_CN = "test_1"
+USER_2_CN = "test_2"
+USER_3_CN = "test_3"
+USER_4_CN = "test_4"
+
def test_modrdn_attr_uniqueness(topology_st):
"""Test that we can not add two entries that have the same attr value that is
@@ -154,3 +168,160 @@ def test_multiple_attr_uniqueness(topology_st):
testuser2.delete()
attruniq.disable()
attruniq.delete()
+
+
+def test_exclude_subtrees(topology_st):
+ """ Test attribute uniqueness with exclude scope
+
+ :id: 43d29a60-40e1-4ebd-b897-6ef9f20e9f27
+ :setup: Standalone instance
+ :steps:
+ 1. Setup and enable attribute uniqueness plugin for telephonenumber unique attribute
+ 2. Create subtrees and test users
+ 3. Add a unique attribute to a user within uniqueness scope
+ 4. Add exclude subtree
+ 5. Try to add existing value attribute to an entry within uniqueness scope
+ 6. Try to add existing value attribute to an entry within exclude scope
+ 7. Remove the attribute from affected entries
+ 8. Add a unique attribute to a user within exclude scope
+ 9. Try to add existing value attribute to an entry within uniqueness scope
+ 10. Try to add existing value attribute to another entry within uniqueness scope
+ 11. Remove the attribute from affected entries
+ 12. Add another exclude subtree
+ 13. Add a unique attribute to a user within uniqueness scope
+ 14. Try to add existing value attribute to an entry within uniqueness scope
+ 15. Try to add existing value attribute to an entry within exclude scope
+ 16. Try to add existing value attribute to an entry within another exclude scope
+ 17. Clean up entries
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. Success
+ 5. Should raise CONSTRAINT_VIOLATION
+ 6. Success
+ 7. Success
+ 8. Success
+ 9. Success
+ 10. Should raise CONSTRAINT_VIOLATION
+ 11. Success
+ 12. Success
+ 13. Success
+ 14. Should raise CONSTRAINT_VIOLATION
+ 15. Success
+ 16. Success
+ 17. Success
+ """
+ log.info('Setup attribute uniqueness plugin')
+ attruniq = AttributeUniquenessPlugin(topology_st.standalone, dn="cn=attruniq,cn=plugins,cn=config")
+ attruniq.create(properties={'cn': 'attruniq'})
+ attruniq.add_unique_attribute('telephonenumber')
+ attruniq.add_unique_subtree(DEFAULT_SUFFIX)
+ attruniq.enable_all_subtrees()
+ attruniq.enable()
+ topology_st.standalone.restart()
+
+ log.info('Create subtrees container')
+ containers = nsContainers(topology_st.standalone, DEFAULT_SUFFIX)
+ cont1 = containers.create(properties={'cn': EXCLUDED_CONTAINER_CN})
+ cont2 = containers.create(properties={'cn': EXCLUDED_BIS_CONTAINER_CN})
+ cont3 = containers.create(properties={'cn': ENFORCED_CONTAINER_CN})
+
+ log.info('Create test users')
+ users = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX,
+ rdn='cn={}'.format(ENFORCED_CONTAINER_CN))
+ users_excluded = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX,
+ rdn='cn={}'.format(EXCLUDED_CONTAINER_CN))
+ users_excluded2 = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX,
+ rdn='cn={}'.format(EXCLUDED_BIS_CONTAINER_CN))
+
+ user1 = users.create(properties={'cn': USER_1_CN,
+ 'uid': USER_1_CN,
+ 'sn': USER_1_CN,
+ 'uidNumber': '1',
+ 'gidNumber': '11',
+ 'homeDirectory': '/home/{}'.format(USER_1_CN)})
+ user2 = users.create(properties={'cn': USER_2_CN,
+ 'uid': USER_2_CN,
+ 'sn': USER_2_CN,
+ 'uidNumber': '2',
+ 'gidNumber': '22',
+ 'homeDirectory': '/home/{}'.format(USER_2_CN)})
+ user3 = users_excluded.create(properties={'cn': USER_3_CN,
+ 'uid': USER_3_CN,
+ 'sn': USER_3_CN,
+ 'uidNumber': '3',
+ 'gidNumber': '33',
+ 'homeDirectory': '/home/{}'.format(USER_3_CN)})
+ user4 = users_excluded2.create(properties={'cn': USER_4_CN,
+ 'uid': USER_4_CN,
+ 'sn': USER_4_CN,
+ 'uidNumber': '4',
+ 'gidNumber': '44',
+ 'homeDirectory': '/home/{}'.format(USER_4_CN)})
+
+ UNIQUE_VALUE = '1234'
+
+ try:
+ log.info('Create user with unique attribute')
+ user1.add('telephonenumber', UNIQUE_VALUE)
+ assert user1.present('telephonenumber', UNIQUE_VALUE)
+
+ log.info('Add exclude subtree')
+ attruniq.add_exclude_subtree(EXCLUDED_CONTAINER_DN)
+ topology_st.standalone.restart()
+
+ log.info('Verify an already used attribute value cannot be added within the same subtree')
+ with pytest.raises(ldap.CONSTRAINT_VIOLATION):
+ user2.add('telephonenumber', UNIQUE_VALUE)
+
+ log.info('Verify an entry with same attribute value can be added within exclude subtree')
+ user3.add('telephonenumber', UNIQUE_VALUE)
+ assert user3.present('telephonenumber', UNIQUE_VALUE)
+
+ log.info('Cleanup unique attribute values')
+ user1.remove_all('telephonenumber')
+ user3.remove_all('telephonenumber')
+
+ log.info('Add a unique value to an entry in excluded scope')
+ user3.add('telephonenumber', UNIQUE_VALUE)
+ assert user3.present('telephonenumber', UNIQUE_VALUE)
+
+ log.info('Verify the same value can be added to an entry within uniqueness scope')
+ user1.add('telephonenumber', UNIQUE_VALUE)
+ assert user1.present('telephonenumber', UNIQUE_VALUE)
+
+        log.info('Verify the same value cannot be added to yet another entry within uniqueness scope')
+ with pytest.raises(ldap.CONSTRAINT_VIOLATION):
+ user2.add('telephonenumber', UNIQUE_VALUE)
+
+ log.info('Cleanup unique attribute values')
+ user1.remove_all('telephonenumber')
+ user3.remove_all('telephonenumber')
+
+ log.info('Add another exclude subtree')
+ attruniq.add_exclude_subtree(EXCLUDED_BIS_CONTAINER_DN)
+ topology_st.standalone.restart()
+
+ user1.add('telephonenumber', UNIQUE_VALUE)
+ log.info('Verify an already used attribute value cannot be added within the same subtree')
+ with pytest.raises(ldap.CONSTRAINT_VIOLATION):
+ user2.add('telephonenumber', UNIQUE_VALUE)
+
+        log.info('Verify an already used attribute value can be added to an entry in exclude scope')
+ user3.add('telephonenumber', UNIQUE_VALUE)
+ assert user3.present('telephonenumber', UNIQUE_VALUE)
+ user4.add('telephonenumber', UNIQUE_VALUE)
+ assert user4.present('telephonenumber', UNIQUE_VALUE)
+
+ finally:
+ log.info('Clean up users, containers and attribute uniqueness plugin')
+ user1.delete()
+ user2.delete()
+ user3.delete()
+ user4.delete()
+ cont1.delete()
+ cont2.delete()
+ cont3.delete()
+ attruniq.disable()
+ attruniq.delete()
\ No newline at end of file
diff --git a/dirsrvtests/tests/tickets/ticket47927_test.py b/dirsrvtests/tests/tickets/ticket47927_test.py
deleted file mode 100644
index 887fe1af4..000000000
--- a/dirsrvtests/tests/tickets/ticket47927_test.py
+++ /dev/null
@@ -1,267 +0,0 @@
-# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2016 Red Hat, Inc.
-# All rights reserved.
-#
-# License: GPL (version 3 or any later version).
-# See LICENSE for details.
-# --- END COPYRIGHT BLOCK ---
-#
-import pytest
-from lib389.tasks import *
-from lib389.utils import *
-from lib389.topologies import topology_st
-
-from lib389._constants import SUFFIX, DEFAULT_SUFFIX, PLUGIN_ATTR_UNIQUENESS
-
-# Skip on older versions
-pytestmark = [pytest.mark.tier2,
- pytest.mark.skipif(ds_is_older('1.3.4'), reason="Not implemented")]
-
-logging.getLogger(__name__).setLevel(logging.DEBUG)
-log = logging.getLogger(__name__)
-
-EXCLUDED_CONTAINER_CN = "excluded_container"
-EXCLUDED_CONTAINER_DN = "cn=%s,%s" % (EXCLUDED_CONTAINER_CN, SUFFIX)
-
-EXCLUDED_BIS_CONTAINER_CN = "excluded_bis_container"
-EXCLUDED_BIS_CONTAINER_DN = "cn=%s,%s" % (EXCLUDED_BIS_CONTAINER_CN, SUFFIX)
-
-ENFORCED_CONTAINER_CN = "enforced_container"
-ENFORCED_CONTAINER_DN = "cn=%s,%s" % (ENFORCED_CONTAINER_CN, SUFFIX)
-
-USER_1_CN = "test_1"
-USER_1_DN = "cn=%s,%s" % (USER_1_CN, ENFORCED_CONTAINER_DN)
-USER_2_CN = "test_2"
-USER_2_DN = "cn=%s,%s" % (USER_2_CN, ENFORCED_CONTAINER_DN)
-USER_3_CN = "test_3"
-USER_3_DN = "cn=%s,%s" % (USER_3_CN, EXCLUDED_CONTAINER_DN)
-USER_4_CN = "test_4"
-USER_4_DN = "cn=%s,%s" % (USER_4_CN, EXCLUDED_BIS_CONTAINER_DN)
-
-
-def test_ticket47927_init(topology_st):
- topology_st.standalone.plugins.enable(name=PLUGIN_ATTR_UNIQUENESS)
- try:
- topology_st.standalone.modify_s('cn=' + PLUGIN_ATTR_UNIQUENESS + ',cn=plugins,cn=config',
- [(ldap.MOD_REPLACE, 'uniqueness-attribute-name', b'telephonenumber'),
- (ldap.MOD_REPLACE, 'uniqueness-subtrees', ensure_bytes(DEFAULT_SUFFIX)),
- ])
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927: Failed to configure plugin for "telephonenumber": error ' + e.args[0]['desc'])
- assert False
- topology_st.standalone.restart(timeout=120)
-
- topology_st.standalone.add_s(Entry((EXCLUDED_CONTAINER_DN, {'objectclass': "top nscontainer".split(),
- 'cn': EXCLUDED_CONTAINER_CN})))
- topology_st.standalone.add_s(Entry((EXCLUDED_BIS_CONTAINER_DN, {'objectclass': "top nscontainer".split(),
- 'cn': EXCLUDED_BIS_CONTAINER_CN})))
- topology_st.standalone.add_s(Entry((ENFORCED_CONTAINER_DN, {'objectclass': "top nscontainer".split(),
- 'cn': ENFORCED_CONTAINER_CN})))
-
- # adding an entry on a stage with a different 'cn'
- topology_st.standalone.add_s(Entry((USER_1_DN, {
- 'objectclass': "top person".split(),
- 'sn': USER_1_CN,
- 'cn': USER_1_CN})))
- # adding an entry on a stage with a different 'cn'
- topology_st.standalone.add_s(Entry((USER_2_DN, {
- 'objectclass': "top person".split(),
- 'sn': USER_2_CN,
- 'cn': USER_2_CN})))
- topology_st.standalone.add_s(Entry((USER_3_DN, {
- 'objectclass': "top person".split(),
- 'sn': USER_3_CN,
- 'cn': USER_3_CN})))
- topology_st.standalone.add_s(Entry((USER_4_DN, {
- 'objectclass': "top person".split(),
- 'sn': USER_4_CN,
- 'cn': USER_4_CN})))
-
-
-def test_ticket47927_one(topology_st):
- '''
- Check that uniqueness is enforce on all SUFFIX
- '''
- UNIQUE_VALUE = b'1234'
- try:
- topology_st.standalone.modify_s(USER_1_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_one: Failed to set the telephonenumber for %s: %s' % (USER_1_DN, e.args[0]['desc']))
- assert False
-
- # we expect to fail because user1 is in the scope of the plugin
- try:
- topology_st.standalone.modify_s(USER_2_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_one: unexpected success to set the telephonenumber for %s' % (USER_2_DN))
- assert False
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_one: Failed (expected) to set the telephonenumber for %s: %s' % (
- USER_2_DN, e.args[0]['desc']))
- pass
-
- # we expect to fail because user1 is in the scope of the plugin
- try:
- topology_st.standalone.modify_s(USER_3_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_one: unexpected success to set the telephonenumber for %s' % (USER_3_DN))
- assert False
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_one: Failed (expected) to set the telephonenumber for %s: %s' % (
- USER_3_DN, e.args[0]['desc']))
- pass
-
-
-def test_ticket47927_two(topology_st):
- '''
- Exclude the EXCLUDED_CONTAINER_DN from the uniqueness plugin
- '''
- try:
- topology_st.standalone.modify_s('cn=' + PLUGIN_ATTR_UNIQUENESS + ',cn=plugins,cn=config',
- [(ldap.MOD_REPLACE, 'uniqueness-exclude-subtrees', ensure_bytes(EXCLUDED_CONTAINER_DN))])
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_two: Failed to configure plugin for to exclude %s: error %s' % (
- EXCLUDED_CONTAINER_DN, e.args[0]['desc']))
- assert False
- topology_st.standalone.restart(timeout=120)
-
-
-def test_ticket47927_three(topology_st):
- '''
- Check that uniqueness is enforced on full SUFFIX except EXCLUDED_CONTAINER_DN
- First case: it exists an entry (with the same attribute value) in the scope
- of the plugin and we set the value in an entry that is in an excluded scope
- '''
- UNIQUE_VALUE = b'9876'
- try:
- topology_st.standalone.modify_s(USER_1_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_three: Failed to set the telephonenumber ' + e.args[0]['desc'])
- assert False
-
- # we should not be allowed to set this value (because user1 is in the scope)
- try:
- topology_st.standalone.modify_s(USER_2_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_three: unexpected success to set the telephonenumber for %s' % (USER_2_DN))
- assert False
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_three: Failed (expected) to set the telephonenumber for %s: %s' % (
- USER_2_DN, e.args[0]['desc']))
-
- # USER_3_DN is in EXCLUDED_CONTAINER_DN so update should be successful
- try:
- topology_st.standalone.modify_s(USER_3_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_three: success to set the telephonenumber for %s' % (USER_3_DN))
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_three: Failed (unexpected) to set the telephonenumber for %s: %s' % (
- USER_3_DN, e.args[0]['desc']))
- assert False
-
-
-def test_ticket47927_four(topology_st):
- '''
- Check that uniqueness is enforced on full SUFFIX except EXCLUDED_CONTAINER_DN
- Second case: it exists an entry (with the same attribute value) in an excluded scope
- of the plugin and we set the value in an entry is in the scope
- '''
- UNIQUE_VALUE = b'1111'
- # USER_3_DN is in EXCLUDED_CONTAINER_DN so update should be successful
- try:
- topology_st.standalone.modify_s(USER_3_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_four: success to set the telephonenumber for %s' % USER_3_DN)
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_four: Failed (unexpected) to set the telephonenumber for %s: %s' % (
- USER_3_DN, e.args[0]['desc']))
- assert False
-
- # we should be allowed to set this value (because user3 is excluded from scope)
- try:
- topology_st.standalone.modify_s(USER_1_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- except ldap.LDAPError as e:
- log.fatal(
- 'test_ticket47927_four: Failed to set the telephonenumber for %s: %s' % (USER_1_DN, e.args[0]['desc']))
- assert False
-
- # we should not be allowed to set this value (because user1 is in the scope)
- try:
- topology_st.standalone.modify_s(USER_2_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_four: unexpected success to set the telephonenumber %s' % USER_2_DN)
- assert False
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_four: Failed (expected) to set the telephonenumber for %s: %s' % (
- USER_2_DN, e.args[0]['desc']))
- pass
-
-
-def test_ticket47927_five(topology_st):
- '''
- Exclude the EXCLUDED_BIS_CONTAINER_DN from the uniqueness plugin
- '''
- try:
- topology_st.standalone.modify_s('cn=' + PLUGIN_ATTR_UNIQUENESS + ',cn=plugins,cn=config',
- [(ldap.MOD_ADD, 'uniqueness-exclude-subtrees', ensure_bytes(EXCLUDED_BIS_CONTAINER_DN))])
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_five: Failed to configure plugin for to exclude %s: error %s' % (
- EXCLUDED_BIS_CONTAINER_DN, e.args[0]['desc']))
- assert False
- topology_st.standalone.restart(timeout=120)
- topology_st.standalone.getEntry('cn=' + PLUGIN_ATTR_UNIQUENESS + ',cn=plugins,cn=config', ldap.SCOPE_BASE)
-
-
-def test_ticket47927_six(topology_st):
- '''
- Check that uniqueness is enforced on full SUFFIX except EXCLUDED_CONTAINER_DN
- and EXCLUDED_BIS_CONTAINER_DN
- First case: it exists an entry (with the same attribute value) in the scope
- of the plugin and we set the value in an entry that is in an excluded scope
- '''
- UNIQUE_VALUE = b'222'
- try:
- topology_st.standalone.modify_s(USER_1_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_six: Failed to set the telephonenumber ' + e.args[0]['desc'])
- assert False
-
- # we should not be allowed to set this value (because user1 is in the scope)
- try:
- topology_st.standalone.modify_s(USER_2_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_six: unexpected success to set the telephonenumber for %s' % (USER_2_DN))
- assert False
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_six: Failed (expected) to set the telephonenumber for %s: %s' % (
- USER_2_DN, e.args[0]['desc']))
-
- # USER_3_DN is in EXCLUDED_CONTAINER_DN so update should be successful
- try:
- topology_st.standalone.modify_s(USER_3_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_six: success to set the telephonenumber for %s' % (USER_3_DN))
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_six: Failed (unexpected) to set the telephonenumber for %s: %s' % (
- USER_3_DN, e.args[0]['desc']))
- assert False
- # USER_4_DN is in EXCLUDED_CONTAINER_DN so update should be successful
- try:
- topology_st.standalone.modify_s(USER_4_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_six: success to set the telephonenumber for %s' % (USER_4_DN))
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_six: Failed (unexpected) to set the telephonenumber for %s: %s' % (
- USER_4_DN, e.args[0]['desc']))
- assert False
-
-
-if __name__ == '__main__':
- # Run isolated
- # -s for DEBUG mode
- CURRENT_FILE = os.path.realpath(__file__)
- pytest.main("-s %s" % CURRENT_FILE)
diff --git a/src/lib389/lib389/plugins.py b/src/lib389/lib389/plugins.py
index 31bbfa502..977091726 100644
--- a/src/lib389/lib389/plugins.py
+++ b/src/lib389/lib389/plugins.py
@@ -175,6 +175,16 @@ class AttributeUniquenessPlugin(Plugin):
self.set('uniqueness-across-all-subtrees', 'off')
+ def add_exclude_subtree(self, basedn):
+ """Add a uniqueness-exclude-subtrees attribute"""
+
+ self.add('uniqueness-exclude-subtrees', basedn)
+
+ def remove_exclude_subtree(self, basedn):
+ """Remove a uniqueness-exclude-subtrees attribute"""
+
+ self.remove('uniqueness-exclude-subtrees', basedn)
+
class AttributeUniquenessPlugins(DSLdapObjects):
"""A DSLdapObjects entity which represents Attribute Uniqueness plugin instances
--
2.49.0
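Editor's note: for orientation, a minimal sketch of how the two new AttributeUniquenessPlugin helpers from this patch are meant to be driven. It assumes `inst` is an already-connected lib389 DirSrv instance; the subtree DN and the function name are illustrative, not part of the patch.

from lib389.plugins import AttributeUniquenessPlugin

def toggle_exclude(inst, subtree='ou=excluded,dc=example,dc=com'):
    # `inst` is assumed to be a connected lib389 DirSrv instance; the
    # subtree DN is hypothetical.
    attruniq = AttributeUniquenessPlugin(inst, dn="cn=attruniq,cn=plugins,cn=config")
    attruniq.add_exclude_subtree(subtree)
    inst.restart()   # the plugin re-reads its configuration on restart
    # ... adds that previously raised CONSTRAINT_VIOLATION now succeed here ...
    attruniq.remove_exclude_subtree(subtree)
    inst.restart()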


@@ -0,0 +1,33 @@
From 9adaeba848a5b0fbe5d3a6148736f6c2ae940c35 Mon Sep 17 00:00:00 2001
From: progier389 <progier@redhat.com>
Date: Mon, 5 Jan 2026 14:38:38 +0100
Subject: [PATCH] Issue 7166 - db_config_set asserts because of dynamic list
(#7167)
Avoid an assertion in db_config_set when args does not contain dynamic list attributes
Issue: #7166
Reviewed by: @tbordaz (Thanks!)
(cherry picked from commit 5f15223280002803a932187c22b10beaeaa74bc2)
---
src/lib389/lib389/cli_conf/backend.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/lib389/lib389/cli_conf/backend.py b/src/lib389/lib389/cli_conf/backend.py
index 677b37fcb..d0ec4bd9e 100644
--- a/src/lib389/lib389/cli_conf/backend.py
+++ b/src/lib389/lib389/cli_conf/backend.py
@@ -544,7 +544,7 @@ def db_config_set(inst, basedn, log, args):
did_something = False
replace_list = []
- if args.enable_dynamic_lists and args.disable_dynamic_lists:
+    if getattr(args, 'enable_dynamic_lists', None) and getattr(args, 'disable_dynamic_lists', None):
raise ValueError("You can not enable and disable dynamic lists at the same time")
for attr, value in list(attrs.items()):
--
2.52.0
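Editor's note: the fix relies on getattr with a default so that argparse namespaces which never registered the dynamic-list options do not raise. A standalone sketch of the pattern (the function name is illustrative, not the actual CLI wiring):

from argparse import Namespace

def check_dynamic_list_args(args):
    # getattr with a default tolerates namespaces that never defined these
    # options (e.g. subcommands that do not register them), whereas
    # args.enable_dynamic_lists would raise AttributeError.
    enable = getattr(args, 'enable_dynamic_lists', None)
    disable = getattr(args, 'disable_dynamic_lists', None)
    if enable and disable:
        raise ValueError("You can not enable and disable dynamic lists at the same time")

check_dynamic_list_args(Namespace())  # no dynamic-list attrs: no-op, no AttributeError
check_dynamic_list_args(Namespace(enable_dynamic_lists=True,
                                  disable_dynamic_lists=False))  # also fine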


@@ -1,45 +0,0 @@
From 4fb3a2ea084c1de3ba60ca97a5dd14fc7b8225bd Mon Sep 17 00:00:00 2001
From: Alexander Bokovoy <abokovoy@redhat.com>
Date: Wed, 9 Jul 2025 12:08:09 +0300
Subject: [PATCH] Issue 6857 - uiduniq: allow specifying match rules in the
filter
Allow the uniqueness plugin to work with attributes where uniqueness should
be enforced using a different matching rule than the one defined for the
attribute itself.
Since the uniqueness plugin configuration can contain multiple attributes,
the matching rule is appended directly to the attribute name, using the
same syntax as an LDAP extensible-match rule (e.g. 'attribute:caseIgnoreMatch:'
forces 'attribute' to be searched with a case-insensitive matching rule
instead of its original one).
Fixes: https://github.com/389ds/389-ds-base/issues/6857
Signed-off-by: Alexander Bokovoy <abokovoy@redhat.com>
---
ldap/servers/plugins/uiduniq/uid.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/ldap/servers/plugins/uiduniq/uid.c b/ldap/servers/plugins/uiduniq/uid.c
index 053af4f9d..887e79d78 100644
--- a/ldap/servers/plugins/uiduniq/uid.c
+++ b/ldap/servers/plugins/uiduniq/uid.c
@@ -1030,7 +1030,14 @@ preop_add(Slapi_PBlock *pb)
}
for (i = 0; attrNames && attrNames[i]; i++) {
+ char *attr_match = strchr(attrNames[i], ':');
+ if (attr_match != NULL) {
+ attr_match[0] = '\0';
+ }
err = slapi_entry_attr_find(e, attrNames[i], &attr);
+ if (attr_match != NULL) {
+ attr_match[0] = ':';
+ }
if (!err) {
/*
* Passed all the requirements - this is an operation we
--
2.49.0
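Editor's note: the C hunk above temporarily NUL-terminates the configured name at the first ':' so the attribute lookup sees only the attribute, then restores the separator. A rough Python model of that split, purely illustrative:

def split_attr_and_rule(configured):
    # 'telephonenumber:caseIgnoreMatch:' -> ('telephonenumber', 'caseIgnoreMatch')
    # 'telephonenumber'                  -> ('telephonenumber', None)
    attr, sep, rest = configured.partition(':')
    return attr, (rest.rstrip(':') or None) if sep else None

assert split_attr_and_rule('telephonenumber:caseIgnoreMatch:') == \
    ('telephonenumber', 'caseIgnoreMatch')
assert split_attr_and_rule('telephonenumber') == ('telephonenumber', None)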


@@ -0,0 +1,57 @@
From 7693d5335c498de1dbb783042cc4acec0138e44d Mon Sep 17 00:00:00 2001
From: Simon Pichugin <spichugi@redhat.com>
Date: Mon, 5 Jan 2026 18:32:52 -0800
Subject: [PATCH] Issue 7160 - Add lib389 version sync check to configure
(#7165)
Description: Add version validation during configure that ensures the
lib389 version in pyproject.toml matches the main project version
in VERSION.sh. Configure fails with a clear error message and fix
instructions when the versions do not match, preventing inconsistent releases.
Fixes: https://github.com/389ds/389-ds-base/issues/7160
Reviewed by: @progier389 (Thanks!)
---
configure.ac | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/configure.ac b/configure.ac
index e94f72647..7fd061a8c 100644
--- a/configure.ac
+++ b/configure.ac
@@ -7,6 +7,31 @@ AC_CONFIG_HEADERS([config.h])
# include the version information
. $srcdir/VERSION.sh
AC_MSG_NOTICE(This is configure for $PACKAGE_TARNAME $PACKAGE_VERSION)
+
+# Validate lib389 version matches main project version
+AC_MSG_CHECKING([lib389 version sync])
+lib389_pyproject="$srcdir/src/lib389/pyproject.toml"
+if test -f "$lib389_pyproject"; then
+ lib389_version=$(grep -E '^version\s*=' "$lib389_pyproject" | sed 's/.*"\(.*\)".*/\1/')
+ if test "x$lib389_version" != "x$RPM_VERSION"; then
+ AC_MSG_RESULT([MISMATCH])
+ AC_MSG_ERROR([
+lib389 version mismatch detected!
+ Main project version (VERSION.sh): $RPM_VERSION
+ lib389 version (pyproject.toml): $lib389_version
+
+To fix this, run:
+ cd $srcdir/src/lib389 && python3 validate_version.py --update
+
+lib389 version MUST match the main project version before release.
+])
+ else
+ AC_MSG_RESULT([ok ($lib389_version)])
+ fi
+else
+ AC_MSG_RESULT([MISSING])
+ AC_MSG_ERROR([lib389 pyproject.toml not found at $lib389_pyproject - source tree is incomplete])
+fi
AM_INIT_AUTOMAKE([1.9 foreign subdir-objects dist-bzip2 no-dist-gzip no-define tar-pax])
AC_SUBST([RPM_VERSION])
AC_SUBST([RPM_RELEASE])
--
2.52.0
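Editor's note: the shell logic above can be mirrored in Python; a minimal sketch of the same comparison (the pyproject.toml path and the version line format are taken from the patch, everything else is illustrative):

import re
import sys

def lib389_version(pyproject_path):
    # Extract version = "X.Y.Z" from pyproject.toml, as the grep/sed
    # in configure.ac does.
    with open(pyproject_path) as f:
        for line in f:
            m = re.match(r'^version\s*=\s*"(.*)"', line)
            if m:
                return m.group(1)
    return None

def check_sync(pyproject_path, rpm_version):
    found = lib389_version(pyproject_path)
    if found != rpm_version:
        sys.exit(f"lib389 version mismatch: {found} != {rpm_version}")

# check_sync('src/lib389/pyproject.toml', '3.1.4')  # hypothetical invocation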

File diff suppressed because it is too large.


@@ -0,0 +1,318 @@
From 5de98cdc10bd333d0695c00d57e137d699f90f1c Mon Sep 17 00:00:00 2001
From: tbordaz <tbordaz@redhat.com>
Date: Wed, 7 Jan 2026 11:21:12 +0100
Subject: [PATCH] Issue 7096 - During replication online total init the
function idl_id_is_in_idlist is not scaling with large database (#7145)
Bug description:
During an online total initialization, the supplier sorts
the candidate list of entries so that parent entries are sent
before their children.
With a large DB, the ID array used for the sorting does not
scale: building the candidate list takes so long that
the connection gets closed.
Fix description:
Instead of using an ID array, use a list of ID ranges.
fixes: #7096
Reviewed by: Mark Reynolds, Pierre Rogier (Thanks !!)
---
ldap/servers/slapd/back-ldbm/back-ldbm.h | 12 ++
ldap/servers/slapd/back-ldbm/idl_common.c | 163 ++++++++++++++++++
ldap/servers/slapd/back-ldbm/idl_new.c | 30 ++--
.../servers/slapd/back-ldbm/proto-back-ldbm.h | 3 +
4 files changed, 189 insertions(+), 19 deletions(-)
diff --git a/ldap/servers/slapd/back-ldbm/back-ldbm.h b/ldap/servers/slapd/back-ldbm/back-ldbm.h
index 1bc36720d..b187c26bc 100644
--- a/ldap/servers/slapd/back-ldbm/back-ldbm.h
+++ b/ldap/servers/slapd/back-ldbm/back-ldbm.h
@@ -282,6 +282,18 @@ typedef struct _idlist_set
#define INDIRECT_BLOCK(idl) ((idl)->b_nids == INDBLOCK)
#define IDL_NIDS(idl) (idl ? (idl)->b_nids : (NIDS)0)
+/*
+ * Used by the supplier during online total init.
+ * It stores the ranges of IDs that are already present
+ * in the candidate list ('parentid>=1').
+ */
+typedef struct IdRange {
+ ID first;
+ ID last;
+ struct IdRange *next;
+} IdRange_t;
+
+
typedef size_t idl_iterator;
/* small hashtable implementation used in the entry cache -- the table
diff --git a/ldap/servers/slapd/back-ldbm/idl_common.c b/ldap/servers/slapd/back-ldbm/idl_common.c
index fcb0ece4b..fdc9b4e67 100644
--- a/ldap/servers/slapd/back-ldbm/idl_common.c
+++ b/ldap/servers/slapd/back-ldbm/idl_common.c
@@ -172,6 +172,169 @@ idl_min(IDList *a, IDList *b)
return (a->b_nids > b->b_nids ? b : a);
}
+/*
+ * This is a faster version of idl_id_is_in_idlist.
+ * idl_id_is_in_idlist uses an array of IDs, so lookup is expensive;
+ * idl_id_is_in_idlist_ranges uses a list of ID ranges, so lookup is faster.
+ * Returns:
+ * 1: 'id' is present in idrange_list
+ * 0: 'id' is not present in idrange_list
+ */
+int
+idl_id_is_in_idlist_ranges(IDList *idl, IdRange_t *idrange_list, ID id)
+{
+ IdRange_t *range = idrange_list;
+ int found = 0;
+
+ if (NULL == idl || NOID == id) {
+ return 0; /* not in the list */
+ }
+ if (ALLIDS(idl)) {
+ return 1; /* in the list */
+ }
+
+ for(;range; range = range->next) {
+ if (id > range->last) {
+ /* check if it belongs to the next range */
+ continue;
+ }
+ if (id >= range->first) {
+ /* It belongs to that range [first..last ] */
+ found = 1;
+ break;
+ } else {
+ /* this range is after id */
+ break;
+ }
+ }
+ return found;
+}
+
+/* This function is used during the online total initialisation
+ * (see the next function).
+ * It frees all ID ranges in the list.
+ */
+void idrange_free(IdRange_t **head)
+{
+ IdRange_t *curr, *sav;
+
+ if ((head == NULL) || (*head == NULL)) {
+ return;
+ }
+ curr = *head;
+ sav = NULL;
+ for (; curr;) {
+ sav = curr;
+ curr = curr->next;
+ slapi_ch_free((void *) &sav);
+ }
+ if (sav) {
+ slapi_ch_free((void *) &sav);
+ }
+ *head = NULL;
+}
+
+/* This function is used during the online total initialisation.
+ * Because a MODRDN can move entries under a parent that
+ * has a higher ID, we need to sort the IDList so that parents
+ * are sent to the consumer before their children.
+ * Sorting with a simple IDList does not scale; a list
+ * of ID ranges is much faster.
+ * In that list we only add/look up IDs.
+ */
+IdRange_t *idrange_add_id(IdRange_t **head, ID id)
+{
+ if (head == NULL) {
+ slapi_log_err(SLAPI_LOG_ERR, "idrange_add_id",
+ "Can not add ID %d in non defined list\n", id);
+ return NULL;
+ }
+
+ if (*head == NULL) {
+ /* This is the first range */
+ IdRange_t *new_range = (IdRange_t *)slapi_ch_malloc(sizeof(IdRange_t));
+ new_range->first = id;
+ new_range->last = id;
+ new_range->next = NULL;
+ *head = new_range;
+ return *head;
+ }
+
+ IdRange_t *curr = *head, *prev = NULL;
+
+ /* First, find if id already falls within any existing range, or it is adjacent to any */
+ while (curr) {
+ if (id >= curr->first && id <= curr->last) {
+ /* inside a range, nothing to do */
+ return curr;
+ }
+
+ if (id == curr->last + 1) {
+ /* Extend this range upwards */
+ curr->last = id;
+
+ /* Check for possible merge with next range */
+ IdRange_t *next = curr->next;
+ if (next && curr->last + 1 >= next->first) {
+ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id",
+ "(id=%d) merge current with next range [%d..%d]\n", id, curr->first, curr->last);
+ curr->last = (next->last > curr->last) ? next->last : curr->last;
+ curr->next = next->next;
+ slapi_ch_free((void*) &next);
+ } else {
+ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id",
+ "(id=%d) extend forward current range [%d..%d]\n", id, curr->first, curr->last);
+ }
+ return curr;
+ }
+
+ if (id + 1 == curr->first) {
+ /* Extend this range downwards */
+ curr->first = id;
+
+ /* Check for possible merge with previous range */
+ if (prev && prev->last + 1 >= curr->first) {
+ prev->last = curr->last;
+ prev->next = curr->next;
+ slapi_ch_free((void *) &curr);
+ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id",
+ "(id=%d) merge current with previous range [%d..%d]\n", id, prev->first, prev->last);
+ return prev;
+ } else {
+ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id",
+ "(id=%d) extend backward current range [%d..%d]\n", id, curr->first, curr->last);
+ return curr;
+ }
+ }
+
+ /* If id is before the current range, break so we can insert before */
+ if (id < curr->first) {
+ break;
+ }
+
+ prev = curr;
+ curr = curr->next;
+ }
+ /* Need to insert a new standalone IdRange */
+ IdRange_t *new_range = (IdRange_t *)slapi_ch_malloc(sizeof(IdRange_t));
+ new_range->first = id;
+ new_range->last = id;
+ new_range->next = curr;
+
+ if (prev) {
+ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id",
+ "(id=%d) add new range [%d..%d]\n", id, new_range->first, new_range->last);
+ prev->next = new_range;
+ } else {
+ /* Insert at head */
+ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id",
+ "(id=%d) head range [%d..%d]\n", id, new_range->first, new_range->last);
+ *head = new_range;
+ }
+ return *head;
+}
+
+
int
idl_id_is_in_idlist(IDList *idl, ID id)
{
diff --git a/ldap/servers/slapd/back-ldbm/idl_new.c b/ldap/servers/slapd/back-ldbm/idl_new.c
index 5fbcaff2e..2d978353f 100644
--- a/ldap/servers/slapd/back-ldbm/idl_new.c
+++ b/ldap/servers/slapd/back-ldbm/idl_new.c
@@ -417,7 +417,6 @@ idl_new_range_fetch(
{
int ret = 0;
int ret2 = 0;
- int idl_rc = 0;
dbi_cursor_t cursor = {0};
IDList *idl = NULL;
dbi_val_t cur_key = {0};
@@ -436,6 +435,7 @@ idl_new_range_fetch(
size_t leftoverlen = 32;
size_t leftovercnt = 0;
char *index_id = get_index_name(be, db, ai);
+ IdRange_t *idrange_list = NULL;
if (NULL == flag_err) {
@@ -578,10 +578,12 @@ idl_new_range_fetch(
* found entry is the one from the suffix
*/
suffix = key;
- idl_rc = idl_append_extend(&idl, id);
- } else if ((key == suffix) || idl_id_is_in_idlist(idl, key)) {
+ idl_append_extend(&idl, id);
+ idrange_add_id(&idrange_list, id);
+ } else if ((key == suffix) || idl_id_is_in_idlist_ranges(idl, idrange_list, key)) {
/* the parent is the suffix or already in idl. */
- idl_rc = idl_append_extend(&idl, id);
+ idl_append_extend(&idl, id);
+ idrange_add_id(&idrange_list, id);
} else {
/* Otherwise, keep the {key,id} in leftover array */
if (!leftover) {
@@ -596,13 +598,7 @@ idl_new_range_fetch(
leftovercnt++;
}
} else {
- idl_rc = idl_append_extend(&idl, id);
- }
- if (idl_rc) {
- slapi_log_err(SLAPI_LOG_ERR, "idl_new_range_fetch",
- "Unable to extend id list (err=%d)\n", idl_rc);
- idl_free(&idl);
- goto error;
+ idl_append_extend(&idl, id);
}
count++;
@@ -695,21 +691,17 @@ error:
while(remaining > 0) {
for (size_t i = 0; i < leftovercnt; i++) {
- if (leftover[i].key > 0 && idl_id_is_in_idlist(idl, leftover[i].key) != 0) {
+ if (leftover[i].key > 0 && idl_id_is_in_idlist_ranges(idl, idrange_list, leftover[i].key) != 0) {
/* if the leftover key has its parent in the idl */
- idl_rc = idl_append_extend(&idl, leftover[i].id);
- if (idl_rc) {
- slapi_log_err(SLAPI_LOG_ERR, "idl_new_range_fetch",
- "Unable to extend id list (err=%d)\n", idl_rc);
- idl_free(&idl);
- return NULL;
- }
+ idl_append_extend(&idl, leftover[i].id);
+ idrange_add_id(&idrange_list, leftover[i].id);
leftover[i].key = 0;
remaining--;
}
}
}
slapi_ch_free((void **)&leftover);
+ idrange_free(&idrange_list);
}
slapi_log_err(SLAPI_LOG_FILTER, "idl_new_range_fetch",
"Found %d candidates; error code is: %d\n",
diff --git a/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h b/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h
index 91d61098a..30a7aa11f 100644
--- a/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h
+++ b/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h
@@ -217,6 +217,9 @@ ID idl_firstid(IDList *idl);
ID idl_nextid(IDList *idl, ID id);
int idl_init_private(backend *be, struct attrinfo *a);
int idl_release_private(struct attrinfo *a);
+IdRange_t *idrange_add_id(IdRange_t **head, ID id);
+void idrange_free(IdRange_t **head);
+int idl_id_is_in_idlist_ranges(IDList *idl, IdRange_t *idrange_list, ID id);
int idl_id_is_in_idlist(IDList *idl, ID id);
idl_iterator idl_iterator_init(const IDList *idl);
--
2.52.0
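Editor's note: as a cross-check of the range logic above, here is a compact Python model of the same insert-with-merge and membership behaviour (a sketch only; the C code additionally special-cases ALLIDS and NOID, and all names here are illustrative):

class IdRangeList:
    """Model of the patch's IdRange_t list: sorted, non-overlapping
    [first..last] ranges supporting only add and membership lookup."""

    def __init__(self):
        self.ranges = []  # list of [first, last], kept sorted and disjoint

    def add(self, idv):
        r = self.ranges
        i = 0
        while i < len(r) and idv > r[i][1] + 1:
            i += 1                      # skip ranges entirely below idv
        if i == len(r):
            r.append([idv, idv])        # beyond all ranges: new tail range
            return
        first, last = r[i]
        if idv + 1 < first:
            r.insert(i, [idv, idv])     # strictly before: new standalone range
        else:
            r[i][0] = min(first, idv)   # extend downwards if adjacent
            r[i][1] = max(last, idv)    # extend upwards if adjacent
            # merge with the following range if they now touch
            if i + 1 < len(r) and r[i][1] + 1 >= r[i + 1][0]:
                r[i][1] = max(r[i][1], r[i + 1][1])
                del r[i + 1]

    def __contains__(self, idv):
        return any(first <= idv <= last for first, last in self.ranges)

rl = IdRangeList()
for v in (5, 3, 4, 10):
    rl.add(v)
assert rl.ranges == [[3, 5], [10, 10]] and 4 in rl and 7 not in rl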

File diff suppressed because it is too large.


@@ -0,0 +1,765 @@
From 6f3bf5a48d504646751be9e91293487eec972ed8 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Wed, 7 Jan 2026 16:55:27 -0500
Subject: [PATCH] Issue - Revise paged result search locking
Description:
Move to a single lock approach versus having two locks. This will impact
concurrency when multiple async paged result searches are done on the same
connection, but it simplifies the code and avoids race conditions and
deadlocks.
Relates: https://github.com/389ds/389-ds-base/issues/7118
Reviewed by: progier & tbordaz (Thanks!!)
---
ldap/servers/slapd/abandon.c | 2 +-
ldap/servers/slapd/opshared.c | 60 ++++----
ldap/servers/slapd/pagedresults.c | 228 +++++++++++++++++++-----------
ldap/servers/slapd/proto-slap.h | 26 ++--
ldap/servers/slapd/slap.h | 5 +-
5 files changed, 187 insertions(+), 134 deletions(-)
diff --git a/ldap/servers/slapd/abandon.c b/ldap/servers/slapd/abandon.c
index 6024fcd31..1f47c531c 100644
--- a/ldap/servers/slapd/abandon.c
+++ b/ldap/servers/slapd/abandon.c
@@ -179,7 +179,7 @@ do_abandon(Slapi_PBlock *pb)
logpb.tv_sec = -1;
logpb.tv_nsec = -1;
- if (0 == pagedresults_free_one_msgid(pb_conn, id, pageresult_lock_get_addr(pb_conn))) {
+ if (0 == pagedresults_free_one_msgid(pb_conn, id, PR_NOT_LOCKED)) {
if (log_format != LOG_FORMAT_DEFAULT) {
/* JSON logging */
logpb.target_op = "Simple Paged Results";
diff --git a/ldap/servers/slapd/opshared.c b/ldap/servers/slapd/opshared.c
index a5cddfd23..bf800f7dc 100644
--- a/ldap/servers/slapd/opshared.c
+++ b/ldap/servers/slapd/opshared.c
@@ -572,8 +572,8 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
be = be_list[index];
}
}
- pr_search_result = pagedresults_get_search_result(pb_conn, operation, 0 /*not locked*/, pr_idx);
- estimate = pagedresults_get_search_result_set_size_estimate(pb_conn, operation, pr_idx);
+ pr_search_result = pagedresults_get_search_result(pb_conn, operation, PR_NOT_LOCKED, pr_idx);
+ estimate = pagedresults_get_search_result_set_size_estimate(pb_conn, operation, PR_NOT_LOCKED, pr_idx);
/* Set operation note flags as required. */
if (pagedresults_get_unindexed(pb_conn, operation, pr_idx)) {
slapi_pblock_set_flag_operation_notes(pb, SLAPI_OP_NOTE_UNINDEXED);
@@ -619,14 +619,7 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
int32_t tlimit;
slapi_pblock_get(pb, SLAPI_SEARCH_TIMELIMIT, &tlimit);
pagedresults_set_timelimit(pb_conn, operation, (time_t)tlimit, pr_idx);
- /* When using this mutex in conjunction with the main paged
- * result lock, you must do so in this order:
- *
- * --> pagedresults_lock()
- * --> pagedresults_mutex
- * <-- pagedresults_mutex
- * <-- pagedresults_unlock()
- */
+ /* IMPORTANT: Never acquire pagedresults_mutex when holding c_mutex. */
pagedresults_mutex = pageresult_lock_get_addr(pb_conn);
}
@@ -743,17 +736,15 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
if (op_is_pagedresults(operation) && pr_search_result) {
void *sr = NULL;
/* PAGED RESULTS and already have the search results from the prev op */
- pagedresults_lock(pb_conn, pr_idx);
/*
* In async paged result case, the search result might be released
     * by other threads. We need to double check it in the locked region.
*/
pthread_mutex_lock(pagedresults_mutex);
- pr_search_result = pagedresults_get_search_result(pb_conn, operation, 1 /*locked*/, pr_idx);
+ pr_search_result = pagedresults_get_search_result(pb_conn, operation, PR_LOCKED, pr_idx);
if (pr_search_result) {
- if (pagedresults_is_abandoned_or_notavailable(pb_conn, 1 /*locked*/, pr_idx)) {
+ if (pagedresults_is_abandoned_or_notavailable(pb_conn, PR_LOCKED, pr_idx)) {
pthread_mutex_unlock(pagedresults_mutex);
- pagedresults_unlock(pb_conn, pr_idx);
/* Previous operation was abandoned and the simplepaged object is not in use. */
send_ldap_result(pb, 0, NULL, "Simple Paged Results Search abandoned", 0, NULL);
rc = LDAP_SUCCESS;
@@ -764,14 +755,13 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
/* search result could be reset in the backend/dse */
slapi_pblock_get(pb, SLAPI_SEARCH_RESULT_SET, &sr);
- pagedresults_set_search_result(pb_conn, operation, sr, 1 /*locked*/, pr_idx);
+ pagedresults_set_search_result(pb_conn, operation, sr, PR_LOCKED, pr_idx);
}
} else {
pr_stat = PAGEDRESULTS_SEARCH_END;
rc = LDAP_SUCCESS;
}
pthread_mutex_unlock(pagedresults_mutex);
- pagedresults_unlock(pb_conn, pr_idx);
if ((PAGEDRESULTS_SEARCH_END == pr_stat) || (0 == pnentries)) {
/* no more entries to send in the backend */
@@ -789,22 +779,22 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
}
pagedresults_set_response_control(pb, 0, estimate,
curr_search_count, pr_idx);
- if (pagedresults_get_with_sort(pb_conn, operation, pr_idx)) {
+ if (pagedresults_get_with_sort(pb_conn, operation, PR_NOT_LOCKED, pr_idx)) {
sort_make_sort_response_control(pb, CONN_GET_SORT_RESULT_CODE, NULL);
}
pagedresults_set_search_result_set_size_estimate(pb_conn,
operation,
- estimate, pr_idx);
+ estimate, PR_NOT_LOCKED, pr_idx);
if (PAGEDRESULTS_SEARCH_END == pr_stat) {
- pagedresults_lock(pb_conn, pr_idx);
+ pthread_mutex_lock(pagedresults_mutex);
slapi_pblock_set(pb, SLAPI_SEARCH_RESULT_SET, NULL);
- if (!pagedresults_is_abandoned_or_notavailable(pb_conn, 0 /*not locked*/, pr_idx)) {
- pagedresults_free_one(pb_conn, operation, pr_idx);
+ if (!pagedresults_is_abandoned_or_notavailable(pb_conn, PR_LOCKED, pr_idx)) {
+ pagedresults_free_one(pb_conn, operation, PR_LOCKED, pr_idx);
}
- pagedresults_unlock(pb_conn, pr_idx);
+ pthread_mutex_unlock(pagedresults_mutex);
if (next_be) {
/* no more entries, but at least another backend */
- if (pagedresults_set_current_be(pb_conn, next_be, pr_idx, 0) < 0) {
+ if (pagedresults_set_current_be(pb_conn, next_be, pr_idx, PR_NOT_LOCKED) < 0) {
goto free_and_return;
}
}
@@ -915,7 +905,7 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
}
}
pagedresults_set_search_result(pb_conn, operation, NULL, 1, pr_idx);
- rc = pagedresults_set_current_be(pb_conn, NULL, pr_idx, 1);
+ rc = pagedresults_set_current_be(pb_conn, NULL, pr_idx, PR_LOCKED);
pthread_mutex_unlock(pagedresults_mutex);
#pragma GCC diagnostic pop
}
@@ -954,7 +944,7 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
pthread_mutex_lock(pagedresults_mutex);
pagedresults_set_search_result(pb_conn, operation, NULL, 1, pr_idx);
be->be_search_results_release(&sr);
- rc = pagedresults_set_current_be(pb_conn, next_be, pr_idx, 1);
+ rc = pagedresults_set_current_be(pb_conn, next_be, pr_idx, PR_LOCKED);
pthread_mutex_unlock(pagedresults_mutex);
pr_stat = PAGEDRESULTS_SEARCH_END; /* make sure stat is SEARCH_END */
if (NULL == next_be) {
@@ -967,23 +957,23 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
} else {
curr_search_count = pnentries;
slapi_pblock_get(pb, SLAPI_SEARCH_RESULT_SET_SIZE_ESTIMATE, &estimate);
- pagedresults_lock(pb_conn, pr_idx);
- if ((pagedresults_set_current_be(pb_conn, be, pr_idx, 0) < 0) ||
- (pagedresults_set_search_result(pb_conn, operation, sr, 0, pr_idx) < 0) ||
- (pagedresults_set_search_result_count(pb_conn, operation, curr_search_count, pr_idx) < 0) ||
- (pagedresults_set_search_result_set_size_estimate(pb_conn, operation, estimate, pr_idx) < 0) ||
- (pagedresults_set_with_sort(pb_conn, operation, with_sort, pr_idx) < 0)) {
- pagedresults_unlock(pb_conn, pr_idx);
+ pthread_mutex_lock(pagedresults_mutex);
+ if ((pagedresults_set_current_be(pb_conn, be, pr_idx, PR_LOCKED) < 0) ||
+ (pagedresults_set_search_result(pb_conn, operation, sr, PR_LOCKED, pr_idx) < 0) ||
+ (pagedresults_set_search_result_count(pb_conn, operation, curr_search_count, PR_LOCKED, pr_idx) < 0) ||
+ (pagedresults_set_search_result_set_size_estimate(pb_conn, operation, estimate, PR_LOCKED, pr_idx) < 0) ||
+ (pagedresults_set_with_sort(pb_conn, operation, with_sort, PR_LOCKED, pr_idx) < 0)) {
+ pthread_mutex_unlock(pagedresults_mutex);
cache_return_target_entry(pb, be, operation);
goto free_and_return;
}
- pagedresults_unlock(pb_conn, pr_idx);
+ pthread_mutex_unlock(pagedresults_mutex);
}
slapi_pblock_set(pb, SLAPI_SEARCH_RESULT_SET, NULL);
next_be = NULL; /* to break the loop */
if (operation->o_status & SLAPI_OP_STATUS_ABANDONED) {
/* It turned out this search was abandoned. */
- pagedresults_free_one_msgid(pb_conn, operation->o_msgid, pagedresults_mutex);
+ pagedresults_free_one_msgid(pb_conn, operation->o_msgid, PR_NOT_LOCKED);
/* paged-results-request was abandoned; making an empty cookie. */
pagedresults_set_response_control(pb, 0, estimate, -1, pr_idx);
send_ldap_result(pb, 0, NULL, "Simple Paged Results Search abandoned", 0, NULL);
@@ -993,7 +983,7 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
}
pagedresults_set_response_control(pb, 0, estimate, curr_search_count, pr_idx);
if (curr_search_count == -1) {
- pagedresults_free_one(pb_conn, operation, pr_idx);
+ pagedresults_free_one(pb_conn, operation, PR_NOT_LOCKED, pr_idx);
}
}
diff --git a/ldap/servers/slapd/pagedresults.c b/ldap/servers/slapd/pagedresults.c
index 941ab97e3..0d6c4a1aa 100644
--- a/ldap/servers/slapd/pagedresults.c
+++ b/ldap/servers/slapd/pagedresults.c
@@ -34,9 +34,9 @@ pageresult_lock_cleanup()
slapi_ch_free((void**)&lock_hash);
}
-/* Beware to the lock order with c_mutex:
- * c_mutex is sometime locked while holding pageresult_lock
- * ==> Do not lock pageresult_lock when holing c_mutex
+/* Lock ordering constraint with c_mutex:
+ * c_mutex is sometimes locked while holding pageresult_lock.
+ * Therefore: DO NOT acquire pageresult_lock when holding c_mutex.
*/
pthread_mutex_t *
pageresult_lock_get_addr(Connection *conn)
@@ -44,7 +44,11 @@ pageresult_lock_get_addr(Connection *conn)
return &lock_hash[(((size_t)conn)/sizeof (Connection))%LOCK_HASH_SIZE];
}
-/* helper function to clean up one prp slot */
+/* helper function to clean up one prp slot
+ *
+ * NOTE: This function must be called while holding the pageresult_lock
+ * (via pageresult_lock_get_addr(conn)) to ensure thread-safe cleanup.
+ */
static void
_pr_cleanup_one_slot(PagedResults *prp)
{
@@ -56,7 +60,7 @@ _pr_cleanup_one_slot(PagedResults *prp)
prp->pr_current_be->be_search_results_release(&(prp->pr_search_result_set));
}
- /* clean up the slot except the mutex */
+ /* clean up the slot */
prp->pr_current_be = NULL;
prp->pr_search_result_set = NULL;
prp->pr_search_result_count = 0;
@@ -136,6 +140,8 @@ pagedresults_parse_control_value(Slapi_PBlock *pb,
return LDAP_UNWILLING_TO_PERFORM;
}
+ /* Acquire hash-based lock for paged results list access
+ * IMPORTANT: Never acquire this lock when holding c_mutex */
pthread_mutex_lock(pageresult_lock_get_addr(conn));
/* the ber encoding is no longer needed */
ber_free(ber, 1);
@@ -184,10 +190,6 @@ pagedresults_parse_control_value(Slapi_PBlock *pb,
goto bail;
}
- if ((*index > -1) && (*index < conn->c_pagedresults.prl_maxlen) &&
- !conn->c_pagedresults.prl_list[*index].pr_mutex) {
- conn->c_pagedresults.prl_list[*index].pr_mutex = PR_NewLock();
- }
conn->c_pagedresults.prl_count++;
} else {
/* Repeated paged results request.
@@ -327,8 +329,14 @@ bailout:
"<= idx=%d\n", index);
}
+/*
+ * Free one paged result entry by index.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_free_one(Connection *conn, Operation *op, int index)
+pagedresults_free_one(Connection *conn, Operation *op, bool locked, int index)
{
int rc = -1;
@@ -338,7 +346,9 @@ pagedresults_free_one(Connection *conn, Operation *op, int index)
slapi_log_err(SLAPI_LOG_TRACE, "pagedresults_free_one",
"=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (conn->c_pagedresults.prl_count <= 0) {
slapi_log_err(SLAPI_LOG_TRACE, "pagedresults_free_one",
"conn=%" PRIu64 " paged requests list count is %d\n",
@@ -349,7 +359,9 @@ pagedresults_free_one(Connection *conn, Operation *op, int index)
conn->c_pagedresults.prl_count--;
rc = 0;
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
}
slapi_log_err(SLAPI_LOG_TRACE, "pagedresults_free_one", "<= %d\n", rc);
@@ -357,21 +369,28 @@ pagedresults_free_one(Connection *conn, Operation *op, int index)
}
/*
- * Used for abandoning - pageresult_lock_get_addr(conn) is already locked in do_abandone.
+ * Free one paged result entry by message ID.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
*/
int
-pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, pthread_mutex_t *mutex)
+pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, bool locked)
{
int rc = -1;
int i;
+ pthread_mutex_t *lock = NULL;
if (conn && (msgid > -1)) {
if (conn->c_pagedresults.prl_maxlen <= 0) {
; /* Not a paged result. */
} else {
slapi_log_err(SLAPI_LOG_TRACE,
- "pagedresults_free_one_msgid_nolock", "=> msgid=%d\n", msgid);
- pthread_mutex_lock(mutex);
+ "pagedresults_free_one_msgid", "=> msgid=%d\n", msgid);
+ lock = pageresult_lock_get_addr(conn);
+ if (!locked) {
+ pthread_mutex_lock(lock);
+ }
for (i = 0; i < conn->c_pagedresults.prl_maxlen; i++) {
if (conn->c_pagedresults.prl_list[i].pr_msgid == msgid) {
PagedResults *prp = conn->c_pagedresults.prl_list + i;
@@ -390,9 +409,11 @@ pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, pthread_mutex_t *
break;
}
}
- pthread_mutex_unlock(mutex);
+ if (!locked) {
+ pthread_mutex_unlock(lock);
+ }
slapi_log_err(SLAPI_LOG_TRACE,
- "pagedresults_free_one_msgid_nolock", "<= %d\n", rc);
+ "pagedresults_free_one_msgid", "<= %d\n", rc);
}
}
@@ -418,29 +439,43 @@ pagedresults_get_current_be(Connection *conn, int index)
return be;
}
+/*
+ * Set current backend for a paged result entry.
+ *
+ * Locking: If locked=false, acquires pageresult_lock. If locked=true, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_set_current_be(Connection *conn, Slapi_Backend *be, int index, int nolock)
+pagedresults_set_current_be(Connection *conn, Slapi_Backend *be, int index, bool locked)
{
int rc = -1;
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_set_current_be", "=> idx=%d\n", index);
if (conn && (index > -1)) {
- if (!nolock)
+ if (!locked) {
pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
conn->c_pagedresults.prl_list[index].pr_current_be = be;
}
rc = 0;
- if (!nolock)
+ if (!locked) {
pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
}
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_set_current_be", "<= %d\n", rc);
return rc;
}
+/*
+ * Get search result set for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
void *
-pagedresults_get_search_result(Connection *conn, Operation *op, int locked, int index)
+pagedresults_get_search_result(Connection *conn, Operation *op, bool locked, int index)
{
void *sr = NULL;
if (!op_is_pagedresults(op)) {
@@ -465,8 +500,14 @@ pagedresults_get_search_result(Connection *conn, Operation *op, int locked, int
return sr;
}
+/*
+ * Set search result set for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, int locked, int index)
+pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, bool locked, int index)
{
int rc = -1;
if (!op_is_pagedresults(op)) {
@@ -494,8 +535,14 @@ pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, int lo
return rc;
}
+/*
+ * Get search result count for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_get_search_result_count(Connection *conn, Operation *op, int index)
+pagedresults_get_search_result_count(Connection *conn, Operation *op, bool locked, int index)
{
int count = 0;
if (!op_is_pagedresults(op)) {
@@ -504,19 +551,29 @@ pagedresults_get_search_result_count(Connection *conn, Operation *op, int index)
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_get_search_result_count", "=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
count = conn->c_pagedresults.prl_list[index].pr_search_result_count;
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
}
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_get_search_result_count", "<= %d\n", count);
return count;
}
+/*
+ * Set search result count for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_set_search_result_count(Connection *conn, Operation *op, int count, int index)
+pagedresults_set_search_result_count(Connection *conn, Operation *op, int count, bool locked, int index)
{
int rc = -1;
if (!op_is_pagedresults(op)) {
@@ -525,11 +582,15 @@ pagedresults_set_search_result_count(Connection *conn, Operation *op, int count,
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_set_search_result_count", "=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
conn->c_pagedresults.prl_list[index].pr_search_result_count = count;
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
rc = 0;
}
slapi_log_err(SLAPI_LOG_TRACE,
@@ -537,9 +598,16 @@ pagedresults_set_search_result_count(Connection *conn, Operation *op, int count,
return rc;
}
+/*
+ * Get search result set size estimate for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
pagedresults_get_search_result_set_size_estimate(Connection *conn,
Operation *op,
+ bool locked,
int index)
{
int count = 0;
@@ -550,11 +618,15 @@ pagedresults_get_search_result_set_size_estimate(Connection *conn,
"pagedresults_get_search_result_set_size_estimate",
"=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
count = conn->c_pagedresults.prl_list[index].pr_search_result_set_size_estimate;
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
}
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_get_search_result_set_size_estimate", "<= %d\n",
@@ -562,10 +634,17 @@ pagedresults_get_search_result_set_size_estimate(Connection *conn,
return count;
}
+/*
+ * Set search result set size estimate for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
pagedresults_set_search_result_set_size_estimate(Connection *conn,
Operation *op,
int count,
+ bool locked,
int index)
{
int rc = -1;
@@ -576,11 +655,15 @@ pagedresults_set_search_result_set_size_estimate(Connection *conn,
"pagedresults_set_search_result_set_size_estimate",
"=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
conn->c_pagedresults.prl_list[index].pr_search_result_set_size_estimate = count;
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
rc = 0;
}
slapi_log_err(SLAPI_LOG_TRACE,
@@ -589,8 +672,14 @@ pagedresults_set_search_result_set_size_estimate(Connection *conn,
return rc;
}
+/*
+ * Get with_sort flag for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_get_with_sort(Connection *conn, Operation *op, int index)
+pagedresults_get_with_sort(Connection *conn, Operation *op, bool locked, int index)
{
int flags = 0;
if (!op_is_pagedresults(op)) {
@@ -599,19 +688,29 @@ pagedresults_get_with_sort(Connection *conn, Operation *op, int index)
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_get_with_sort", "=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
flags = conn->c_pagedresults.prl_list[index].pr_flags & CONN_FLAG_PAGEDRESULTS_WITH_SORT;
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
}
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_get_with_sort", "<= %d\n", flags);
return flags;
}
+/*
+ * Set with_sort flag for a paged result entry.
+ *
+ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes
+ * caller already holds pageresult_lock. Never call when holding c_mutex.
+ */
int
-pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, int index)
+pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, bool locked, int index)
{
int rc = -1;
if (!op_is_pagedresults(op)) {
@@ -620,14 +719,18 @@ pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, int index
slapi_log_err(SLAPI_LOG_TRACE,
"pagedresults_set_with_sort", "=> idx=%d\n", index);
if (conn && (index > -1)) {
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_lock(pageresult_lock_get_addr(conn));
+ }
if (index < conn->c_pagedresults.prl_maxlen) {
if (flags & OP_FLAG_SERVER_SIDE_SORTING) {
conn->c_pagedresults.prl_list[index].pr_flags |=
CONN_FLAG_PAGEDRESULTS_WITH_SORT;
}
}
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ if (!locked) {
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
+ }
rc = 0;
}
slapi_log_err(SLAPI_LOG_TRACE, "pagedresults_set_with_sort", "<= %d\n", rc);
@@ -802,10 +905,6 @@ pagedresults_cleanup(Connection *conn, int needlock)
rc = 1;
}
prp->pr_current_be = NULL;
- if (prp->pr_mutex) {
- PR_DestroyLock(prp->pr_mutex);
- prp->pr_mutex = NULL;
- }
memset(prp, '\0', sizeof(PagedResults));
}
conn->c_pagedresults.prl_count = 0;
@@ -840,10 +939,6 @@ pagedresults_cleanup_all(Connection *conn, int needlock)
i < conn->c_pagedresults.prl_maxlen;
i++) {
prp = conn->c_pagedresults.prl_list + i;
- if (prp->pr_mutex) {
- PR_DestroyLock(prp->pr_mutex);
- prp->pr_mutex = NULL;
- }
if (prp->pr_current_be && prp->pr_search_result_set &&
prp->pr_current_be->be_search_results_release) {
prp->pr_current_be->be_search_results_release(&(prp->pr_search_result_set));
@@ -1010,43 +1105,8 @@ op_set_pagedresults(Operation *op)
op->o_flags |= OP_FLAG_PAGED_RESULTS;
}
-/*
- * pagedresults_lock/unlock -- introduced to protect search results for the
- * asynchronous searches. Do not call these functions while the PR conn lock
- * is held (e.g. pageresult_lock_get_addr(conn))
- */
-void
-pagedresults_lock(Connection *conn, int index)
-{
- PagedResults *prp;
- if (!conn || (index < 0) || (index >= conn->c_pagedresults.prl_maxlen)) {
- return;
- }
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
- prp = conn->c_pagedresults.prl_list + index;
- if (prp->pr_mutex) {
- PR_Lock(prp->pr_mutex);
- }
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
-}
-
-void
-pagedresults_unlock(Connection *conn, int index)
-{
- PagedResults *prp;
- if (!conn || (index < 0) || (index >= conn->c_pagedresults.prl_maxlen)) {
- return;
- }
- pthread_mutex_lock(pageresult_lock_get_addr(conn));
- prp = conn->c_pagedresults.prl_list + index;
- if (prp->pr_mutex) {
- PR_Unlock(prp->pr_mutex);
- }
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
-}
-
int
-pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int index)
+pagedresults_is_abandoned_or_notavailable(Connection *conn, bool locked, int index)
{
PagedResults *prp;
int32_t result;
@@ -1066,7 +1126,7 @@ pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int inde
}
int
-pagedresults_set_search_result_pb(Slapi_PBlock *pb, void *sr, int locked)
+pagedresults_set_search_result_pb(Slapi_PBlock *pb, void *sr, bool locked)
{
int rc = -1;
Connection *conn = NULL;
diff --git a/ldap/servers/slapd/proto-slap.h b/ldap/servers/slapd/proto-slap.h
index 6af6583a5..8445670d2 100644
--- a/ldap/servers/slapd/proto-slap.h
+++ b/ldap/servers/slapd/proto-slap.h
@@ -1611,20 +1611,22 @@ pthread_mutex_t *pageresult_lock_get_addr(Connection *conn);
int pagedresults_parse_control_value(Slapi_PBlock *pb, struct berval *psbvp, ber_int_t *pagesize, int *index, Slapi_Backend *be);
void pagedresults_set_response_control(Slapi_PBlock *pb, int iscritical, ber_int_t estimate, int curr_search_count, int index);
Slapi_Backend *pagedresults_get_current_be(Connection *conn, int index);
-int pagedresults_set_current_be(Connection *conn, Slapi_Backend *be, int index, int nolock);
-void *pagedresults_get_search_result(Connection *conn, Operation *op, int locked, int index);
-int pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, int locked, int index);
-int pagedresults_get_search_result_count(Connection *conn, Operation *op, int index);
-int pagedresults_set_search_result_count(Connection *conn, Operation *op, int cnt, int index);
+int pagedresults_set_current_be(Connection *conn, Slapi_Backend *be, int index, bool locked);
+void *pagedresults_get_search_result(Connection *conn, Operation *op, bool locked, int index);
+int pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, bool locked, int index);
+int pagedresults_get_search_result_count(Connection *conn, Operation *op, bool locked, int index);
+int pagedresults_set_search_result_count(Connection *conn, Operation *op, int cnt, bool locked, int index);
int pagedresults_get_search_result_set_size_estimate(Connection *conn,
Operation *op,
+ bool locked,
int index);
int pagedresults_set_search_result_set_size_estimate(Connection *conn,
Operation *op,
int cnt,
+ bool locked,
int index);
-int pagedresults_get_with_sort(Connection *conn, Operation *op, int index);
-int pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, int index);
+int pagedresults_get_with_sort(Connection *conn, Operation *op, bool locked, int index);
+int pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, bool locked, int index);
int pagedresults_get_unindexed(Connection *conn, Operation *op, int index);
int pagedresults_set_unindexed(Connection *conn, Operation *op, int index);
int pagedresults_get_sort_result_code(Connection *conn, Operation *op, int index);
@@ -1636,15 +1638,13 @@ int pagedresults_cleanup(Connection *conn, int needlock);
int pagedresults_is_timedout_nolock(Connection *conn);
int pagedresults_reset_timedout_nolock(Connection *conn);
int pagedresults_in_use_nolock(Connection *conn);
-int pagedresults_free_one(Connection *conn, Operation *op, int index);
-int pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, pthread_mutex_t *mutex);
+int pagedresults_free_one(Connection *conn, Operation *op, bool locked, int index);
+int pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, bool locked);
int op_is_pagedresults(Operation *op);
int pagedresults_cleanup_all(Connection *conn, int needlock);
void op_set_pagedresults(Operation *op);
-void pagedresults_lock(Connection *conn, int index);
-void pagedresults_unlock(Connection *conn, int index);
-int pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int index);
-int pagedresults_set_search_result_pb(Slapi_PBlock *pb, void *sr, int locked);
+int pagedresults_is_abandoned_or_notavailable(Connection *conn, bool locked, int index);
+int pagedresults_set_search_result_pb(Slapi_PBlock *pb, void *sr, bool locked);
/*
* sort.c
diff --git a/ldap/servers/slapd/slap.h b/ldap/servers/slapd/slap.h
index 49cfb4210..abb0d2e47 100644
--- a/ldap/servers/slapd/slap.h
+++ b/ldap/servers/slapd/slap.h
@@ -89,6 +89,10 @@ static char ptokPBE[34] = "Internal (Software) Token ";
#include <stdbool.h>
#include <time.h> /* For timespec definitions */
+/* Macros for paged results lock parameter */
+#define PR_LOCKED true
+#define PR_NOT_LOCKED false
+
/* Provides our int types and platform specific requirements. */
#include <slapi_pal.h>
@@ -1669,7 +1673,6 @@ typedef struct _paged_results
struct timespec pr_timelimit_hr; /* expiry time of this request rel to clock monotonic */
int pr_flags;
ber_int_t pr_msgid; /* msgid of the request; to abandon */
- PRLock *pr_mutex; /* protect each conn structure */
} PagedResults;
/* array of simple paged structure stashed in connection */
--
2.52.0
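The change from `int locked` to `bool locked`, together with the new `PR_LOCKED`/`PR_NOT_LOCKED` macros, makes lock ownership explicit at every call site. A minimal sketch of the convention, using a hypothetical `conn_t` type rather than the server's `Connection`:

```c
#include <pthread.h>
#include <stdbool.h>

#define PR_LOCKED true      /* caller already holds the connection mutex */
#define PR_NOT_LOCKED false /* callee must take and release the mutex */

typedef struct conn {
    pthread_mutex_t c_mutex;
    void *c_search_result;
} conn_t;

/* Take the mutex only when the caller does not already hold it. */
static void
conn_set_search_result(conn_t *conn, void *sr, bool locked)
{
    if (!locked) {
        pthread_mutex_lock(&conn->c_mutex);
    }
    conn->c_search_result = sr;
    if (!locked) {
        pthread_mutex_unlock(&conn->c_mutex);
    }
}

int main(void)
{
    conn_t c = {PTHREAD_MUTEX_INITIALIZER, NULL};
    conn_set_search_result(&c, (void *)1, PR_NOT_LOCKED); /* callee locks */
    pthread_mutex_lock(&c.c_mutex);
    conn_set_search_result(&c, (void *)2, PR_LOCKED); /* caller holds the lock */
    pthread_mutex_unlock(&c.c_mutex);
    return 0;
}
```

Reading `PR_LOCKED` at a call site documents intent in a way a bare `1` never did.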


@@ -1,65 +0,0 @@
From 9d851a63c9f714ba896a90119560246bf49a433c Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 7 Jul 2025 23:11:17 +0200
Subject: [PATCH] Issue 6850 - AddressSanitizer: memory leak in mdb_init
Bug Description:
`dbmdb_componentid` can be allocated multiple times. To avoid a memory
leak, allocate it only once and free it at cleanup.
Fixes: https://github.com/389ds/389-ds-base/issues/6850
Reviewed by: @mreynolds389, @tbordaz (Thanks!)
---
ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c | 4 +++-
ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c | 2 +-
ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c | 5 +++++
3 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c
index 447f3c70a..54ca03b0b 100644
--- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c
+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c
@@ -146,7 +146,9 @@ dbmdb_compute_limits(struct ldbminfo *li)
int mdb_init(struct ldbminfo *li, config_info *config_array)
{
dbmdb_ctx_t *conf = (dbmdb_ctx_t *)slapi_ch_calloc(1, sizeof(dbmdb_ctx_t));
- dbmdb_componentid = generate_componentid(NULL, "db-mdb");
+ if (dbmdb_componentid == NULL) {
+ dbmdb_componentid = generate_componentid(NULL, "db-mdb");
+ }
li->li_dblayer_config = conf;
strncpy(conf->home, li->li_directory, MAXPATHLEN-1);
diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c
index c4e87987f..ed17f979f 100644
--- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c
+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c
@@ -19,7 +19,7 @@
#include <prclist.h>
#include <glob.h>
-Slapi_ComponentId *dbmdb_componentid;
+Slapi_ComponentId *dbmdb_componentid = NULL;
#define BULKOP_MAX_RECORDS 100 /* Max records handled by a single bulk operation */
diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c
index 2d07db9b5..ae10ac7cf 100644
--- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c
+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c
@@ -49,6 +49,11 @@ dbmdb_cleanup(struct ldbminfo *li)
}
slapi_ch_free((void **)&(li->li_dblayer_config));
+ if (dbmdb_componentid != NULL) {
+ release_componentid(dbmdb_componentid);
+ dbmdb_componentid = NULL;
+ }
+
return 0;
}
--
2.49.0
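The fix is the standard allocate-once/free-once pattern for a process-lifetime singleton. A standalone sketch of the same idea, with a plain `strdup()` standing in for `generate_componentid()`:

```c
#include <stdlib.h>
#include <string.h>

static char *component_id = NULL; /* allocated once, freed once */

/* Safe to call from repeated init paths: only the first call allocates. */
static void
component_init(void)
{
    if (component_id == NULL) {
        component_id = strdup("db-mdb");
    }
}

/* Free and reset the pointer so a later init can allocate again. */
static void
component_cleanup(void)
{
    if (component_id != NULL) {
        free(component_id);
        component_id = NULL;
    }
}

int main(void)
{
    component_init();
    component_init(); /* second call is a no-op, not a second allocation */
    component_cleanup();
    return 0;
}
```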


@@ -0,0 +1,183 @@
From cde999edf7246d9dcec4a13950e2c0895165a16e Mon Sep 17 00:00:00 2001
From: Simon Pichugin <spichugi@redhat.com>
Date: Thu, 8 Jan 2026 10:02:39 -0800
Subject: [PATCH] Issue 7108 - Fix shutdown crash in entry cache destruction
(#7163)
Description: The entry cache could experience LRU list corruption when
using pinned entries, leading to crashes during cache flush operations.
In entrycache_add_int(), when returning an existing cached entry, the
code checked the wrong entry's state before calling lru_delete(). It
checked the new entry 'e' but operated on the existing entry 'my_alt',
causing lru_delete() to be called on entries not in the LRU list. This
is fixed by checking my_alt's refcnt and pinned state instead.
In flush_hash(), pinned_remove() and lru_delete() were both called on
pinned entries. Since pinned entries are in the pinned list, calling
lru_delete() afterwards corrupted the list. This is fixed by calling
either pinned_remove() or lru_delete() based on the entry's state.
A NULL check is added in entrycache_flush() and dncache_flush() to
gracefully handle corrupted LRU lists and prevent crashes when
traversing backwards through the list encounters an unexpected NULL.
Entry pointers are now always cleared after lru_delete() removal to
prevent stale pointer issues in non-debug builds.
Fixes: https://github.com/389ds/389-ds-base/issues/7108
Reviewed by: @progier389, @vashirov (Thanks!!)
---
ldap/servers/slapd/back-ldbm/cache.c | 48 +++++++++++++++++++++++++---
1 file changed, 43 insertions(+), 5 deletions(-)
diff --git a/ldap/servers/slapd/back-ldbm/cache.c b/ldap/servers/slapd/back-ldbm/cache.c
index 2e4126134..a87f30687 100644
--- a/ldap/servers/slapd/back-ldbm/cache.c
+++ b/ldap/servers/slapd/back-ldbm/cache.c
@@ -458,11 +458,13 @@ static void
lru_delete(struct cache *cache, void *ptr)
{
struct backcommon *e;
+
if (NULL == ptr) {
LOG("=> lru_delete\n<= lru_delete (null entry)\n");
return;
}
e = (struct backcommon *)ptr;
+
#ifdef LDAP_CACHE_DEBUG_LRU
pinned_verify(cache, __LINE__);
lru_verify(cache, e, 1);
@@ -475,8 +477,9 @@ lru_delete(struct cache *cache, void *ptr)
e->ep_lrunext->ep_lruprev = e->ep_lruprev;
else
cache->c_lrutail = e->ep_lruprev;
-#ifdef LDAP_CACHE_DEBUG_LRU
+ /* Always clear pointers after removal to prevent stale pointer issues */
e->ep_lrunext = e->ep_lruprev = NULL;
+#ifdef LDAP_CACHE_DEBUG_LRU
lru_verify(cache, e, 0);
#endif
}
@@ -633,9 +636,14 @@ flush_hash(struct cache *cache, struct timespec *start_time, int32_t type)
if (entry->ep_refcnt == 0) {
entry->ep_refcnt++;
if (entry->ep_state & ENTRY_STATE_PINNED) {
+ /* Entry is in pinned list, not LRU - remove from pinned only.
+ * pinned_remove clears lru pointers and won't add to LRU since refcnt > 0.
+ */
pinned_remove(cache, laste);
+ } else {
+ /* Entry is in LRU list - remove from LRU */
+ lru_delete(cache, laste);
}
- lru_delete(cache, laste);
if (type == ENTRY_CACHE) {
entrycache_remove_int(cache, laste);
entrycache_return(cache, (struct backentry **)&laste, PR_TRUE);
@@ -679,9 +687,14 @@ flush_hash(struct cache *cache, struct timespec *start_time, int32_t type)
if (entry->ep_refcnt == 0) {
entry->ep_refcnt++;
if (entry->ep_state & ENTRY_STATE_PINNED) {
+ /* Entry is in pinned list, not LRU - remove from pinned only.
+ * pinned_remove clears lru pointers and won't add to LRU since refcnt > 0.
+ */
pinned_remove(cache, laste);
+ } else {
+ /* Entry is in LRU list - remove from LRU */
+ lru_delete(cache, laste);
}
- lru_delete(cache, laste);
entrycache_remove_int(cache, laste);
entrycache_return(cache, (struct backentry **)&laste, PR_TRUE);
} else {
@@ -772,6 +785,11 @@ entrycache_flush(struct cache *cache)
} else {
e = BACK_LRU_PREV(e, struct backentry *);
}
+ if (e == NULL) {
+ slapi_log_err(SLAPI_LOG_WARNING, "entrycache_flush",
+ "Unexpected NULL entry while flushing cache - LRU list may be corrupted\n");
+ break;
+ }
ASSERT(e->ep_refcnt == 0);
e->ep_refcnt++;
if (entrycache_remove_int(cache, e) < 0) {
@@ -1160,6 +1178,7 @@ pinned_remove(struct cache *cache, void *ptr)
{
struct backentry *e = (struct backentry *)ptr;
ASSERT(e->ep_state & ENTRY_STATE_PINNED);
+
cache->c_pinned_ctx->npinned--;
cache->c_pinned_ctx->size -= e->ep_size;
e->ep_state &= ~ENTRY_STATE_PINNED;
@@ -1172,13 +1191,23 @@ pinned_remove(struct cache *cache, void *ptr)
cache->c_pinned_ctx->head = cache->c_pinned_ctx->tail = NULL;
} else {
cache->c_pinned_ctx->head = BACK_LRU_NEXT(e, struct backentry *);
+ /* Update new head's prev pointer to NULL */
+ if (cache->c_pinned_ctx->head) {
+ cache->c_pinned_ctx->head->ep_lruprev = NULL;
+ }
}
} else if (cache->c_pinned_ctx->tail == e) {
cache->c_pinned_ctx->tail = BACK_LRU_PREV(e, struct backentry *);
+ /* Update new tail's next pointer to NULL */
+ if (cache->c_pinned_ctx->tail) {
+ cache->c_pinned_ctx->tail->ep_lrunext = NULL;
+ }
} else {
+ /* Middle of list: update both neighbors to point to each other */
BACK_LRU_PREV(e, struct backentry *)->ep_lrunext = BACK_LRU_NEXT(e, struct backcommon *);
BACK_LRU_NEXT(e, struct backentry *)->ep_lruprev = BACK_LRU_PREV(e, struct backcommon *);
}
+ /* Clear the removed entry's pointers */
e->ep_lrunext = e->ep_lruprev = NULL;
if (e->ep_refcnt == 0) {
lru_add(cache, ptr);
@@ -1245,6 +1274,7 @@ pinned_add(struct cache *cache, void *ptr)
return false;
}
/* Now it is time to insert the entry in the pinned list */
+
cache->c_pinned_ctx->npinned++;
cache->c_pinned_ctx->size += e->ep_size;
e->ep_state |= ENTRY_STATE_PINNED;
@@ -1754,7 +1784,7 @@ entrycache_add_int(struct cache *cache, struct backentry *e, int state, struct b
* 3) ep_state: 0 && state: 0
* ==> increase the refcnt
*/
- if (e->ep_refcnt == 0)
+ if (e->ep_refcnt == 0 && (e->ep_state & ENTRY_STATE_PINNED) == 0)
lru_delete(cache, (void *)e);
e->ep_refcnt++;
e->ep_state &= ~ENTRY_STATE_UNAVAILABLE;
@@ -1781,7 +1811,7 @@ entrycache_add_int(struct cache *cache, struct backentry *e, int state, struct b
} else {
if (alt) {
*alt = my_alt;
- if (e->ep_refcnt == 0 && (e->ep_state & ENTRY_STATE_PINNED) == 0)
+ if (my_alt->ep_refcnt == 0 && (my_alt->ep_state & ENTRY_STATE_PINNED) == 0)
lru_delete(cache, (void *)*alt);
(*alt)->ep_refcnt++;
LOG("the entry %s already exists. returning existing entry %s (state: 0x%x)\n",
@@ -2379,6 +2409,14 @@ dncache_flush(struct cache *cache)
} else {
dn = BACK_LRU_PREV(dn, struct backdn *);
}
+ if (dn == NULL) {
+ /* Safety check: we should normally exit via the CACHE_LRU_HEAD check.
+ * If we get here, c_lruhead may be NULL or the LRU list is corrupted.
+ */
+ slapi_log_err(SLAPI_LOG_WARNING, "dncache_flush",
+ "Unexpected NULL entry while flushing cache - LRU list may be corrupted\n");
+ break;
+ }
ASSERT(dn->ep_refcnt == 0);
dn->ep_refcnt++;
if (dncache_remove_int(cache, dn) < 0) {
--
2.52.0
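Both halves of the fix come down to two invariants of intrusive doubly linked lists: an entry lives on exactly one list at a time, and its link pointers must be cleared on removal. A simplified sketch with hypothetical `node`/`list` types (the cache uses `struct backcommon` with head/tail fields on the cache itself):

```c
#include <stdbool.h>
#include <stddef.h>

struct node {
    struct node *next, *prev;
    bool pinned; /* on the pinned list, not the LRU list */
};

struct list {
    struct node *head, *tail;
};

/* Unlink n, fixing both neighbors (or head/tail), and always clear n's
 * pointers so a stale n->next/n->prev can never be followed later. */
static void
list_unlink(struct list *l, struct node *n)
{
    if (n->prev) n->prev->next = n->next; else l->head = n->next;
    if (n->next) n->next->prev = n->prev; else l->tail = n->prev;
    n->next = n->prev = NULL;
}

/* An entry is on exactly one list, so removal must dispatch on its
 * state - unlinking from both corrupts whichever list it was not on. */
static void
entry_remove(struct list *lru, struct list *pinned, struct node *n)
{
    list_unlink(n->pinned ? pinned : lru, n);
}

int main(void)
{
    struct list lru = {0}, pinned = {0};
    struct node a = {NULL, NULL, false};
    lru.head = lru.tail = &a;
    entry_remove(&lru, &pinned, &a); /* not pinned: unlinks from the LRU */
    return 0;
}
```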


@@ -1,58 +0,0 @@
From 510e0e9b35d94714048a06bc5067d43704f55503 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 7 Jul 2025 22:01:09 +0200
Subject: [PATCH] Issue 6848 - AddressSanitizer: leak in do_search
Bug Description:
When there's a BER decoding error and the function goes to
`free_and_return`, the `attrs` variable is not being freed because it's
only freed if `!psearch || rc != 0 || err != 0`, but `err` is still 0 at
that point.
If we reach `free_and_return` from the `ber_scanf` error path, `attrs`
was never set in the pblock with `slapi_pblock_set()`, so the
`slapi_pblock_get()` call will not retrieve the potentially partially
allocated `attrs` from the BER decoding.
Fixes: https://github.com/389ds/389-ds-base/issues/6848
Reviewed by: @tbordaz, @droideck (Thanks!)
---
ldap/servers/slapd/search.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/ldap/servers/slapd/search.c b/ldap/servers/slapd/search.c
index e9b2c3670..f9d03c090 100644
--- a/ldap/servers/slapd/search.c
+++ b/ldap/servers/slapd/search.c
@@ -235,6 +235,7 @@ do_search(Slapi_PBlock *pb)
log_search_access(pb, base, scope, fstr, "decoding error");
send_ldap_result(pb, LDAP_PROTOCOL_ERROR, NULL, NULL, 0,
NULL);
+ err = 1; /* Make sure we free everything */
goto free_and_return;
}
@@ -420,8 +421,17 @@ free_and_return:
if (!psearch || rc != 0 || err != 0) {
slapi_ch_free_string(&fstr);
slapi_filter_free(filter, 1);
- slapi_pblock_get(pb, SLAPI_SEARCH_ATTRS, &attrs);
- charray_free(attrs); /* passing NULL is fine */
+
+ /* Get attrs from pblock if it was set there, otherwise use local attrs */
+ char **pblock_attrs = NULL;
+ slapi_pblock_get(pb, SLAPI_SEARCH_ATTRS, &pblock_attrs);
+ if (pblock_attrs != NULL) {
+ charray_free(pblock_attrs); /* Free attrs from pblock */
+ slapi_pblock_set(pb, SLAPI_SEARCH_ATTRS, NULL);
+ } else if (attrs != NULL) {
+ /* Free attrs that were allocated but never put in pblock */
+ charray_free(attrs);
+ }
charray_free(gerattrs); /* passing NULL is fine */
/*
* Fix for defect 526719 / 553356 : Persistent search op failed.
--
2.49.0
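The underlying rule is single ownership: once the pointer has been stored in the pblock, cleanup must go through the pblock; until then, the local variable still owns the allocation. A reduced sketch with a hypothetical `pblock` struct (the real code uses `slapi_pblock_get()` and `charray_free()`):

```c
#include <stdlib.h>

struct pblock {
    char **attrs;
};

/* Free whichever copy owns the allocation - never both, never neither. */
static void
cleanup_attrs(struct pblock *pb, char **local_attrs)
{
    if (pb->attrs != NULL) {
        /* ownership was transferred: free through the pblock and clear it */
        for (char **a = pb->attrs; *a; a++) {
            free(*a);
        }
        free(pb->attrs);
        pb->attrs = NULL;
    } else if (local_attrs != NULL) {
        /* allocated but never stored: free the local copy */
        for (char **a = local_attrs; *a; a++) {
            free(*a);
        }
        free(local_attrs);
    }
}

int main(void)
{
    struct pblock pb = {NULL};
    char **attrs = calloc(1, sizeof(char *)); /* empty NULL-terminated array */
    cleanup_attrs(&pb, attrs); /* pblock never saw it: frees the local copy */
    return 0;
}
```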


@@ -0,0 +1,215 @@
From 062aa6eab12d00adffa4e46d58722f6c0e5eeac1 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Fri, 9 Jan 2026 11:39:50 +0100
Subject: [PATCH] Issue 7172 - Index ordering mismatch after upgrade (#7173)
Bug Description:
Commit daf731f55071d45eaf403a52b63d35f4e699ff28 introduced a regression.
After upgrading to a version that adds `integerOrderingMatch` matching
rule to `parentid` and `ancestorid` indexes, searches may return empty
or incorrect results.
This happens because the existing index data was created with
lexicographic ordering, but the new compare function expects integer
ordering. Index lookups fail because the compare function doesn't match
the data ordering.
The root cause is that `ldbm_instance_create_default_indexes()` calls
`attr_index_config()` unconditionally for `parentid` and `ancestorid`
indexes, which triggers `ainfo_dup()` to overwrite `ai_key_cmp_fn` on
existing indexes. This breaks indexes that were created without the
`integerOrderingMatch` matching rule.
Fix Description:
* Call `attr_index_config()` for `parentid` and `ancestorid` indexes
only if index config doesn't exist.
* Add `upgrade_check_id_index_matching_rule()` that logs an error on
server startup if `parentid` or `ancestorid` indexes are missing the
integerOrderingMatch matching rule, advising administrators to reindex.
Fixes: https://github.com/389ds/389-ds-base/issues/7172
Reviewed by: @tbordaz, @progier389, @droideck (Thanks!)
---
ldap/servers/slapd/back-ldbm/instance.c | 25 ++++--
ldap/servers/slapd/upgrade.c | 107 +++++++++++++++++++++++-
2 files changed, 123 insertions(+), 9 deletions(-)
diff --git a/ldap/servers/slapd/back-ldbm/instance.c b/ldap/servers/slapd/back-ldbm/instance.c
index cb002c379..71bf0f6fa 100644
--- a/ldap/servers/slapd/back-ldbm/instance.c
+++ b/ldap/servers/slapd/back-ldbm/instance.c
@@ -190,6 +190,7 @@ ldbm_instance_create_default_indexes(backend *be)
char *ancestorid_indexes_limit = NULL;
char *parentid_indexes_limit = NULL;
struct attrinfo *ai = NULL;
+ struct attrinfo *index_already_configured = NULL;
struct index_idlistsizeinfo *iter;
int cookie;
int limit;
@@ -248,10 +249,14 @@ ldbm_instance_create_default_indexes(backend *be)
ldbm_instance_config_add_index_entry(inst, e, flags);
slapi_entry_free(e);
- e = ldbm_instance_init_config_entry(LDBM_PARENTID_STR, "eq", 0, 0, 0, "integerOrderingMatch", parentid_indexes_limit);
- ldbm_instance_config_add_index_entry(inst, e, flags);
- attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
- slapi_entry_free(e);
+ ainfo_get(be, (char *)LDBM_PARENTID_STR, &ai);
+ index_already_configured = ai;
+ if (!index_already_configured) {
+ e = ldbm_instance_init_config_entry(LDBM_PARENTID_STR, "eq", 0, 0, 0, "integerOrderingMatch", parentid_indexes_limit);
+ ldbm_instance_config_add_index_entry(inst, e, flags);
+ attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
+ slapi_entry_free(e);
+ }
e = ldbm_instance_init_config_entry("objectclass", "eq", 0, 0, 0, 0, 0);
ldbm_instance_config_add_index_entry(inst, e, flags);
@@ -288,10 +293,14 @@ ldbm_instance_create_default_indexes(backend *be)
* ancestorid is special, there is actually no such attr type
* but we still want to use the attr index file APIs.
*/
- e = ldbm_instance_init_config_entry(LDBM_ANCESTORID_STR, "eq", 0, 0, 0, "integerOrderingMatch", ancestorid_indexes_limit);
- ldbm_instance_config_add_index_entry(inst, e, flags);
- attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
- slapi_entry_free(e);
+ ainfo_get(be, (char *)LDBM_ANCESTORID_STR, &ai);
+ index_already_configured = ai;
+ if (!index_already_configured) {
+ e = ldbm_instance_init_config_entry(LDBM_ANCESTORID_STR, "eq", 0, 0, 0, "integerOrderingMatch", ancestorid_indexes_limit);
+ ldbm_instance_config_add_index_entry(inst, e, flags);
+ attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
+ slapi_entry_free(e);
+ }
slapi_ch_free_string(&ancestorid_indexes_limit);
slapi_ch_free_string(&parentid_indexes_limit);
diff --git a/ldap/servers/slapd/upgrade.c b/ldap/servers/slapd/upgrade.c
index 858392564..b02e37ed6 100644
--- a/ldap/servers/slapd/upgrade.c
+++ b/ldap/servers/slapd/upgrade.c
@@ -330,6 +330,107 @@ upgrade_remove_subtree_rename(void)
return UPGRADE_SUCCESS;
}
+/*
+ * Check if parentid/ancestorid indexes are missing the integerOrderingMatch
+ * matching rule.
+ *
+ * This function logs a warning if we detect this condition, advising
+ * the administrator to reindex the affected attributes.
+ */
+static upgrade_status
+upgrade_check_id_index_matching_rule(void)
+{
+ struct slapi_pblock *pb = slapi_pblock_new();
+ Slapi_Entry **backends = NULL;
+ const char *be_base_dn = "cn=ldbm database,cn=plugins,cn=config";
+ const char *be_filter = "(objectclass=nsBackendInstance)";
+ const char *attrs_to_check[] = {"parentid", "ancestorid", NULL};
+ upgrade_status uresult = UPGRADE_SUCCESS;
+
+ /* Search for all backend instances */
+ slapi_search_internal_set_pb(
+ pb, be_base_dn,
+ LDAP_SCOPE_ONELEVEL,
+ be_filter, NULL, 0, NULL, NULL,
+ plugin_get_default_component_id(), 0);
+ slapi_search_internal_pb(pb);
+ slapi_pblock_get(pb, SLAPI_PLUGIN_INTOP_SEARCH_ENTRIES, &backends);
+
+ if (backends) {
+ for (size_t be_idx = 0; backends[be_idx] != NULL; be_idx++) {
+ const char *be_name = slapi_entry_attr_get_ref(backends[be_idx], "cn");
+ if (!be_name) {
+ continue;
+ }
+
+ /* Check each attribute that should have integerOrderingMatch */
+ for (size_t attr_idx = 0; attrs_to_check[attr_idx] != NULL; attr_idx++) {
+ const char *attr_name = attrs_to_check[attr_idx];
+ struct slapi_pblock *idx_pb = slapi_pblock_new();
+ Slapi_Entry **idx_entries = NULL;
+ char *idx_dn = slapi_create_dn_string("cn=%s,cn=index,cn=%s,%s",
+ attr_name, be_name, be_base_dn);
+ char *idx_filter = "(objectclass=nsIndex)";
+ PRBool has_matching_rule = PR_FALSE;
+
+ if (!idx_dn) {
+ slapi_pblock_destroy(idx_pb);
+ continue;
+ }
+
+ slapi_search_internal_set_pb(
+ idx_pb, idx_dn,
+ LDAP_SCOPE_BASE,
+ idx_filter, NULL, 0, NULL, NULL,
+ plugin_get_default_component_id(), 0);
+ slapi_search_internal_pb(idx_pb);
+ slapi_pblock_get(idx_pb, SLAPI_PLUGIN_INTOP_SEARCH_ENTRIES, &idx_entries);
+
+ if (idx_entries && idx_entries[0]) {
+ /* Index exists, check if it has integerOrderingMatch */
+ Slapi_Attr *mr_attr = NULL;
+ if (slapi_entry_attr_find(idx_entries[0], "nsMatchingRule", &mr_attr) == 0) {
+ Slapi_Value *sval = NULL;
+ int idx;
+ for (idx = slapi_attr_first_value(mr_attr, &sval);
+ idx != -1;
+ idx = slapi_attr_next_value(mr_attr, idx, &sval)) {
+ const struct berval *bval = slapi_value_get_berval(sval);
+ if (bval && bval->bv_val &&
+ strcasecmp(bval->bv_val, "integerOrderingMatch") == 0) {
+ has_matching_rule = PR_TRUE;
+ break;
+ }
+ }
+ }
+
+ if (!has_matching_rule) {
+ /* Index exists but doesn't have integerOrderingMatch, log a warning */
+ slapi_log_err(SLAPI_LOG_ERR, "upgrade_check_id_index_matching_rule",
+ "Index '%s' in backend '%s' is missing 'nsMatchingRule: integerOrderingMatch'. "
+ "Incorrectly configured system indexes can lead to poor search performance, replication issues, and other operational problems. "
+ "To fix this, add the matching rule and reindex: "
+ "dsconf <instance> backend index set --add-mr integerOrderingMatch --attr %s %s && "
+ "dsconf <instance> backend index reindex --attr %s %s. "
+ "WARNING: Reindexing can be resource-intensive and may impact server performance on a live system. "
+ "Consider scheduling reindexing during maintenance windows or periods of low activity.\n",
+ attr_name, be_name, attr_name, be_name, attr_name, be_name);
+ }
+ }
+
+ slapi_ch_free_string(&idx_dn);
+ slapi_free_search_results_internal(idx_pb);
+ slapi_pblock_destroy(idx_pb);
+ }
+ }
+ }
+
+ slapi_free_search_results_internal(pb);
+ slapi_pblock_destroy(pb);
+
+ return uresult;
+}
+
/*
* Upgrade the base config of the PAM PTA plugin.
*
@@ -547,7 +648,11 @@ upgrade_server(void)
if (upgrade_pam_pta_default_config() != UPGRADE_SUCCESS) {
return UPGRADE_FAILURE;
}
-
+
+ if (upgrade_check_id_index_matching_rule() != UPGRADE_SUCCESS) {
+ return UPGRADE_FAILURE;
+ }
+
return UPGRADE_SUCCESS;
}
--
2.52.0
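To see why mixing the two orderings breaks lookups, sort the same ID keys under each rule: a B-tree search that compares with one ordering against data laid out in the other can step past keys that are present. A small self-contained illustration:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Lexicographic order: "10" < "9" because '1' < '9'. */
static int
cmp_lex(const void *a, const void *b)
{
    return strcmp(*(const char **)a, *(const char **)b);
}

/* Integer order: 9 < 10. */
static int
cmp_int(const void *a, const void *b)
{
    return atoi(*(const char **)a) - atoi(*(const char **)b);
}

int main(void)
{
    const char *ids[] = {"9", "10", "100", "2"};

    qsort(ids, 4, sizeof(ids[0]), cmp_lex);
    /* prints: 10 100 2 9 - the order pre-upgrade index data is stored in */
    for (int i = 0; i < 4; i++) printf("%s ", ids[i]);
    printf("\n");

    qsort(ids, 4, sizeof(ids[0]), cmp_int);
    /* prints: 2 9 10 100 - the order the new compare function expects */
    for (int i = 0; i < 4; i++) printf("%s ", ids[i]);
    printf("\n");
    return 0;
}
```

Until the data is reindexed under the new ordering, the two disagree, which is why the startup check advises a reindex rather than silently swapping the compare function.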


@@ -1,58 +0,0 @@
From 7b3cd3147a8d3c41327768689962730d8fa28797 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Fri, 11 Jul 2025 12:32:38 +0200
Subject: [PATCH] Issue 6865 - AddressSanitizer: leak in
agmt_update_init_status
Bug Description:
We allocate an array of `LDAPMod *` pointers, but never free it:
```
=================================================================
==2748356==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x7f05e8cb4a07 in __interceptor_malloc (/lib64/libasan.so.6+0xb4a07)
#1 0x7f05e85c0138 in slapi_ch_malloc (/usr/lib64/dirsrv/libslapd.so.0+0x1c0138)
#2 0x7f05e109e481 in agmt_update_init_status ldap/servers/plugins/replication/repl5_agmt.c:2583
#3 0x7f05e10a0aa5 in agmtlist_shutdown ldap/servers/plugins/replication/repl5_agmtlist.c:789
#4 0x7f05e10ab6bc in multisupplier_stop ldap/servers/plugins/replication/repl5_init.c:844
#5 0x7f05e10ab6bc in multisupplier_stop ldap/servers/plugins/replication/repl5_init.c:837
#6 0x7f05e862507d in plugin_call_func ldap/servers/slapd/plugin.c:2001
#7 0x7f05e8625be1 in plugin_call_one ldap/servers/slapd/plugin.c:1950
#8 0x7f05e8625be1 in plugin_dependency_closeall ldap/servers/slapd/plugin.c:1844
#9 0x55e1a7ff9815 in slapd_daemon ldap/servers/slapd/daemon.c:1275
#10 0x55e1a7fd36ef in main (/usr/sbin/ns-slapd+0x3e6ef)
#11 0x7f05e80295cf in __libc_start_call_main (/lib64/libc.so.6+0x295cf)
#12 0x7f05e802967f in __libc_start_main_alias_2 (/lib64/libc.so.6+0x2967f)
#13 0x55e1a7fd74a4 in _start (/usr/sbin/ns-slapd+0x424a4)
SUMMARY: AddressSanitizer: 24 byte(s) leaked in 1 allocation(s).
```
Fix Description:
Ensure `mods` is freed in the cleanup code.
Fixes: https://github.com/389ds/389-ds-base/issues/6865
Relates: https://github.com/389ds/389-ds-base/issues/6470
Reviewed by: @mreynolds389 (Thanks!)
---
ldap/servers/plugins/replication/repl5_agmt.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/ldap/servers/plugins/replication/repl5_agmt.c b/ldap/servers/plugins/replication/repl5_agmt.c
index c818c5857..0a81167b7 100644
--- a/ldap/servers/plugins/replication/repl5_agmt.c
+++ b/ldap/servers/plugins/replication/repl5_agmt.c
@@ -2743,6 +2743,7 @@ agmt_update_init_status(Repl_Agmt *ra)
} else {
PR_Unlock(ra->lock);
}
+ slapi_ch_free((void **)&mods);
slapi_mod_done(&smod_start_time);
slapi_mod_done(&smod_end_time);
slapi_mod_done(&smod_status);
--
2.49.0
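The 24 bytes in the trace are the `LDAPMod *` pointer array itself (three 8-byte pointers on a 64-bit build); the pointees were already released through the `slapi_mod_done()` calls, so only the container leaked. A minimal sketch of the shape of the bug, with a stand-in struct:

```c
#include <stdlib.h>

struct mod {
    int op;
}; /* stand-in for LDAPMod */

int main(void)
{
    /* The pointer array is its own allocation, separate from the
     * pointees it will refer to. */
    struct mod **mods = malloc(3 * sizeof(struct mod *));
    if (mods == NULL) {
        return 1;
    }
    /* ... mods[i] set to objects owned elsewhere (slapi_mod_* wrappers) ... */
    free(mods); /* the fix: release the container on every exit path */
    return 0;
}
```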


@@ -0,0 +1,67 @@
From 6bce8f6e8c985289c4ac1a4f051c291283c0a1ec Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 12 Jan 2026 10:58:02 +0100
Subject: [PATCH 9/9] Issue 7172 - (2nd) Index ordering mismatch after upgrade
(#7180)
Commit 742c12e0247ab64e87da000a4de2f3e5c99044ab introduced a regression
where the check to skip creating parentid/ancestorid indexes if they
already exist was incorrect.
The `ainfo_get()` function falls back to returning
LDBM_PSEUDO_ATTR_DEFAULT attrinfo when the requested attribute is not
found.
Since LDBM_PSEUDO_ATTR_DEFAULT is created before the ancestorid check,
`ainfo_get()` returns LDBM_PSEUDO_ATTR_DEFAULT instead of NULL, causing
the ancestorid index creation to be skipped entirely.
When operations later try to use the ancestorid index, they fall back to
LDBM_PSEUDO_ATTR_DEFAULT, and attempting to open the .default dbi
mid-transaction fails with MDB_NOTFOUND (-30798).
Fix Description:
Instead of just checking if `ainfo_get()` returns non-NULL, verify that
the returned attrinfo is actually for the requested attribute.
Fixes: https://github.com/389ds/389-ds-base/issues/7172
Reviewed by: @tbordaz (Thanks!)
---
ldap/servers/slapd/back-ldbm/instance.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/ldap/servers/slapd/back-ldbm/instance.c b/ldap/servers/slapd/back-ldbm/instance.c
index 71bf0f6fa..2a6e8cbb8 100644
--- a/ldap/servers/slapd/back-ldbm/instance.c
+++ b/ldap/servers/slapd/back-ldbm/instance.c
@@ -190,7 +190,7 @@ ldbm_instance_create_default_indexes(backend *be)
char *ancestorid_indexes_limit = NULL;
char *parentid_indexes_limit = NULL;
struct attrinfo *ai = NULL;
- struct attrinfo *index_already_configured = NULL;
+ int index_already_configured = 0;
struct index_idlistsizeinfo *iter;
int cookie;
int limit;
@@ -250,7 +250,8 @@ ldbm_instance_create_default_indexes(backend *be)
slapi_entry_free(e);
ainfo_get(be, (char *)LDBM_PARENTID_STR, &ai);
- index_already_configured = ai;
+ /* Check if the attrinfo is actually for parentid, not a fallback to .default */
+ index_already_configured = (ai != NULL && strcmp(ai->ai_type, LDBM_PARENTID_STR) == 0);
if (!index_already_configured) {
e = ldbm_instance_init_config_entry(LDBM_PARENTID_STR, "eq", 0, 0, 0, "integerOrderingMatch", parentid_indexes_limit);
ldbm_instance_config_add_index_entry(inst, e, flags);
@@ -294,7 +295,8 @@ ldbm_instance_create_default_indexes(backend *be)
* but we still want to use the attr index file APIs.
*/
ainfo_get(be, (char *)LDBM_ANCESTORID_STR, &ai);
- index_already_configured = ai;
+ /* Check if the attrinfo is actually for ancestorid, not a fallback to .default */
+ index_already_configured = (ai != NULL && strcmp(ai->ai_type, LDBM_ANCESTORID_STR) == 0);
if (!index_already_configured) {
e = ldbm_instance_init_config_entry(LDBM_ANCESTORID_STR, "eq", 0, 0, 0, "integerOrderingMatch", ancestorid_indexes_limit);
ldbm_instance_config_add_index_entry(inst, e, flags);
--
2.52.0
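The general hazard here is a lookup API that substitutes a default instead of failing: a non-NULL return no longer means "found". A toy model of the bug and the corrected check (hypothetical names, mirroring `ainfo_get()`'s fallback behavior):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct attrinfo {
    const char *ai_type;
};

static struct attrinfo default_ai = {".default"};
static struct attrinfo parentid_ai = {"parentid"};

/* Mimics ainfo_get(): an unknown attribute falls back to the ".default"
 * attrinfo instead of returning NULL. */
static struct attrinfo *
lookup_with_fallback(const char *type)
{
    if (strcmp(type, "parentid") == 0) {
        return &parentid_ai;
    }
    return &default_ai;
}

/* A non-NULL result proves nothing; confirm the type actually matches. */
static bool
index_is_configured(const char *type)
{
    struct attrinfo *ai = lookup_with_fallback(type);
    return ai != NULL && strcmp(ai->ai_type, type) == 0;
}

int main(void)
{
    printf("parentid:   %d\n", index_is_configured("parentid"));   /* 1 */
    printf("ancestorid: %d\n", index_is_configured("ancestorid")); /* 0; a
        bare non-NULL check would have wrongly reported 1 here */
    return 0;
}
```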


@@ -1,55 +0,0 @@
From 81af69f415ffdf48861de00ba9a60614c0a02a87 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Fri, 11 Jul 2025 13:49:25 -0400
Subject: [PATCH] Issue 6868 - UI - schema attribute table expansion break
after moving to a new page
Description:
Used the wrong formula to select the expanded row for Attributes
Relates: https://github.com/389ds/389-ds-base/issues/6868
Reviewed by: spichugi(Thanks!)
---
src/cockpit/389-console/src/lib/database/databaseConfig.jsx | 1 -
src/cockpit/389-console/src/lib/schema/schemaTables.jsx | 4 ++--
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/src/cockpit/389-console/src/lib/database/databaseConfig.jsx b/src/cockpit/389-console/src/lib/database/databaseConfig.jsx
index adb8227d7..7a1ce3bc2 100644
--- a/src/cockpit/389-console/src/lib/database/databaseConfig.jsx
+++ b/src/cockpit/389-console/src/lib/database/databaseConfig.jsx
@@ -8,7 +8,6 @@ import {
Form,
Grid,
GridItem,
- Hr,
NumberInput,
Spinner,
Switch,
diff --git a/src/cockpit/389-console/src/lib/schema/schemaTables.jsx b/src/cockpit/389-console/src/lib/schema/schemaTables.jsx
index 609d4af15..446931ac2 100644
--- a/src/cockpit/389-console/src/lib/schema/schemaTables.jsx
+++ b/src/cockpit/389-console/src/lib/schema/schemaTables.jsx
@@ -465,7 +465,7 @@ class AttributesTable extends React.Component {
handleCollapse(event, rowKey, isOpen) {
const { rows, perPage, page } = this.state;
- const index = (perPage * (page - 1) * 2) + rowKey; // Adjust for page set
+ const index = (perPage * (page - 1)) + rowKey; // Adjust for page set
rows[index].isOpen = isOpen;
this.setState({
rows
@@ -525,7 +525,7 @@ class AttributesTable extends React.Component {
];
render() {
- const { perPage, page, sortBy, rows, noRows, columns } = this.state;
+ const { perPage, page, sortBy, rows, columns } = this.state;
const startIdx = (perPage * page) - perPage;
const tableRows = rows.slice(startIdx, startIdx + perPage);
--
2.49.0
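A worked example makes the error concrete: with perPage = 10, page = 2, and rowKey = 3, the corrected formula gives index = 10 * (2 - 1) + 3 = 13, the expanded row's true position in `rows`; the old formula's extra factor of two gave 23, so expansion toggled the wrong element (or fell past the end of the array) on every page after the first.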


@@ -1,169 +0,0 @@
From e4bd0eb2a4ad612efbf7824da022dd5403c71684 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Wed, 9 Jul 2025 14:18:50 -0400
Subject: [PATCH] Issue 6859 - str2filter is not fully applying matching rules
Description:
When we have an extended filter, one with a MR applied, it is ignored during
internal searches:
"(cn:CaseExactMatch:=Value)"
For internal searches we use str2filter(), and it doesn't fully apply extended
search filter matching rules.
The attribute uniqueness plugin also needed updating to apply this change for mod
operations (previously only Adds were correctly handling these attribute
filters).
Relates: https://github.com/389ds/389-ds-base/issues/6857
Relates: https://github.com/389ds/389-ds-base/issues/6859
Reviewed by: spichugi & tbordaz(Thanks!!)
---
.../tests/suites/plugins/attruniq_test.py | 65 ++++++++++++++++++-
ldap/servers/plugins/uiduniq/uid.c | 7 ++
ldap/servers/slapd/plugin_mr.c | 2 +-
ldap/servers/slapd/str2filter.c | 8 +++
4 files changed, 79 insertions(+), 3 deletions(-)
diff --git a/dirsrvtests/tests/suites/plugins/attruniq_test.py b/dirsrvtests/tests/suites/plugins/attruniq_test.py
index aac659c29..046952df3 100644
--- a/dirsrvtests/tests/suites/plugins/attruniq_test.py
+++ b/dirsrvtests/tests/suites/plugins/attruniq_test.py
@@ -1,5 +1,5 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2021 Red Hat, Inc.
+# Copyright (C) 2025 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
@@ -324,4 +324,65 @@ def test_exclude_subtrees(topology_st):
cont2.delete()
cont3.delete()
attruniq.disable()
- attruniq.delete()
\ No newline at end of file
+ attruniq.delete()
+
+
+def test_matchingrule_attr(topology_st):
+ """ Test list extension MR attribute. Check for "cn" using CES (versus it
+ being defined as CIS)
+
+ :id: 5cde4342-6fa3-4225-b23d-0af918981075
+ :setup: Standalone instance
+ :steps:
+ 1. Setup and enable attribute uniqueness plugin to use CN attribute
+ with a matching rule of CaseExactMatch.
+ 2. Add user with CN value is lowercase
+ 3. Add second user with same lowercase CN which should be rejected
+ 4. Add second user with same CN value but with mixed case
+ 5. Modify second user replacing CN value to lc which should be rejected
+
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. Success
+ 5. Success
+ """
+
+ inst = topology_st.standalone
+
+ attruniq = AttributeUniquenessPlugin(inst,
+ dn="cn=attribute uniqueness,cn=plugins,cn=config")
+ attruniq.add_unique_attribute('cn:CaseExactMatch:')
+ attruniq.enable_all_subtrees()
+ attruniq.enable()
+ inst.restart()
+
+ users = UserAccounts(inst, DEFAULT_SUFFIX)
+ users.create(properties={'cn': "common_name",
+ 'uid': "uid_name",
+ 'sn': "uid_name",
+ 'uidNumber': '1',
+ 'gidNumber': '11',
+ 'homeDirectory': '/home/uid_name'})
+
+ log.info('Add entry with the exact CN value which should be rejected')
+ with pytest.raises(ldap.CONSTRAINT_VIOLATION):
+ users.create(properties={'cn': "common_name",
+ 'uid': "uid_name2",
+ 'sn': "uid_name2",
+ 'uidNumber': '11',
+ 'gidNumber': '111',
+ 'homeDirectory': '/home/uid_name2'})
+
+ log.info('Add entry with the mixed case CN value which should be allowed')
+ user = users.create(properties={'cn': "Common_Name",
+ 'uid': "uid_name2",
+ 'sn': "uid_name2",
+ 'uidNumber': '11',
+ 'gidNumber': '111',
+ 'homeDirectory': '/home/uid_name2'})
+
+ log.info('Mod entry with exact case CN value which should be rejected')
+ with pytest.raises(ldap.CONSTRAINT_VIOLATION):
+ user.replace('cn', 'common_name')
diff --git a/ldap/servers/plugins/uiduniq/uid.c b/ldap/servers/plugins/uiduniq/uid.c
index 887e79d78..fdb1404a0 100644
--- a/ldap/servers/plugins/uiduniq/uid.c
+++ b/ldap/servers/plugins/uiduniq/uid.c
@@ -1178,6 +1178,10 @@ preop_modify(Slapi_PBlock *pb)
for (; mods && *mods; mods++) {
mod = *mods;
for (i = 0; attrNames && attrNames[i]; i++) {
+ char *attr_match = strchr(attrNames[i], ':');
+ if (attr_match != NULL) {
+ attr_match[0] = '\0';
+ }
if ((slapi_attr_type_cmp(mod->mod_type, attrNames[i], 1) == 0) && /* mod contains target attr */
(mod->mod_op & LDAP_MOD_BVALUES) && /* mod is bval encoded (not string val) */
(mod->mod_bvalues && mod->mod_bvalues[0]) && /* mod actually contains some values */
@@ -1186,6 +1190,9 @@ preop_modify(Slapi_PBlock *pb)
{
addMod(&checkmods, &checkmodsCapacity, &modcount, mod);
}
+ if (attr_match != NULL) {
+ attr_match[0] = ':';
+ }
}
}
if (modcount == 0) {
diff --git a/ldap/servers/slapd/plugin_mr.c b/ldap/servers/slapd/plugin_mr.c
index 9809a4374..757355dbc 100644
--- a/ldap/servers/slapd/plugin_mr.c
+++ b/ldap/servers/slapd/plugin_mr.c
@@ -625,7 +625,7 @@ attempt_mr_filter_create(mr_filter_t *f, struct slapdplugin *mrp, Slapi_PBlock *
int rc;
int32_t (*mrf_create)(Slapi_PBlock *) = NULL;
f->mrf_match = NULL;
- pblock_init(pb);
+ slapi_pblock_init(pb);
if (!(rc = slapi_pblock_set(pb, SLAPI_PLUGIN, mrp)) &&
!(rc = slapi_pblock_get(pb, SLAPI_PLUGIN_MR_FILTER_CREATE_FN, &mrf_create)) &&
mrf_create != NULL &&
diff --git a/ldap/servers/slapd/str2filter.c b/ldap/servers/slapd/str2filter.c
index 9fdc500f7..5620b7439 100644
--- a/ldap/servers/slapd/str2filter.c
+++ b/ldap/servers/slapd/str2filter.c
@@ -344,6 +344,14 @@ str2simple(char *str, int unescape_filter)
return NULL; /* error */
} else {
f->f_choice = LDAP_FILTER_EXTENDED;
+ if (f->f_mr_oid) {
+ /* apply the MR indexers */
+ rc = plugin_mr_filter_create(&f->f_mr);
+ if (rc) {
+ slapi_filter_free(f, 1);
+ return NULL; /* error */
+ }
+ }
}
} else if (str_find_star(value) == NULL) {
f->f_choice = LDAP_FILTER_EQUALITY;
--
2.49.0
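The uid.c hunk relies on a small truncate-and-restore trick: a configured unique attribute may carry a matching-rule suffix (e.g. `cn:CaseExactMatch:`), so the type comparison temporarily NUL-terminates the string at the first colon and restores it afterwards. A standalone sketch of that trick:

```c
#include <stdio.h>
#include <string.h>
#include <strings.h> /* strcasecmp */

/* Compare just the attribute type, ignoring any ":matchingRule:" suffix
 * in the configured value; the config string is restored before return. */
static int
type_matches(char *conf_attr, const char *mod_type)
{
    char *colon = strchr(conf_attr, ':');
    int match;

    if (colon != NULL) {
        *colon = '\0';
    }
    match = (strcasecmp(conf_attr, mod_type) == 0);
    if (colon != NULL) {
        *colon = ':'; /* put the suffix back for later uses */
    }
    return match;
}

int main(void)
{
    char conf[] = "cn:CaseExactMatch:";
    printf("%d\n", type_matches(conf, "CN")); /* 1 */
    return 0;
}
```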


@@ -1,163 +0,0 @@
From 48e7696fbebc14220b4b9a831c4a170003586152 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Tue, 15 Jul 2025 17:56:18 -0400
Subject: [PATCH] Issue 6872 - compressed log rotation creates files with world
readable permission
Description:
When compressing a log file, first create the empty file using open()
so we can set the correct permissions right from the start. gzopen()
always uses permission 644, which is not safe. After creating the file
with open() and the correct permissions, pass the FD to gzdopen() and
write the compressed content.
relates: https://github.com/389ds/389-ds-base/issues/6872
Reviewed by: progier(Thanks!)
---
.../logging/logging_compression_test.py | 15 ++++++++--
ldap/servers/slapd/log.c | 28 +++++++++++++------
ldap/servers/slapd/schema.c | 2 +-
3 files changed, 33 insertions(+), 12 deletions(-)
diff --git a/dirsrvtests/tests/suites/logging/logging_compression_test.py b/dirsrvtests/tests/suites/logging/logging_compression_test.py
index e30874cc0..3a987d62c 100644
--- a/dirsrvtests/tests/suites/logging/logging_compression_test.py
+++ b/dirsrvtests/tests/suites/logging/logging_compression_test.py
@@ -1,5 +1,5 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2022 Red Hat, Inc.
+# Copyright (C) 2025 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
@@ -22,12 +22,21 @@ log = logging.getLogger(__name__)
pytestmark = pytest.mark.tier1
+
def log_rotated_count(log_type, log_dir, check_compressed=False):
- # Check if the log was rotated
+ """
+ Check if the log was rotated and has the correct permissions
+ """
log_file = f'{log_dir}/{log_type}.2*'
if check_compressed:
log_file += ".gz"
- return len(glob.glob(log_file))
+ log_files = glob.glob(log_file)
+ for logf in log_files:
+ # Check permissions
+ st = os.stat(logf)
+ assert oct(st.st_mode) == '0o100600' # 0600
+
+ return len(log_files)
def update_and_sleep(inst, suffix, sleep=True):
diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c
index 06dae4d0b..eab837166 100644
--- a/ldap/servers/slapd/log.c
+++ b/ldap/servers/slapd/log.c
@@ -174,17 +174,28 @@ get_syslog_loglevel(int loglevel)
}
static int
-compress_log_file(char *log_name)
+compress_log_file(char *log_name, int32_t mode)
{
char gzip_log[BUFSIZ] = {0};
char buf[LOG_CHUNK] = {0};
size_t bytes_read = 0;
gzFile outfile = NULL;
FILE *source = NULL;
+ int fd = 0;
PR_snprintf(gzip_log, sizeof(gzip_log), "%s.gz", log_name);
- if ((outfile = gzopen(gzip_log,"wb")) == NULL) {
- /* Failed to open new gzip file */
+
+ /*
+ * Try to open the file as we may have an incorrect path. We also need to
+ * set the permissions using open() as gzopen() creates the file with
+ * 644 permissions (world readable - bad). So we create an empty file with
+ * the correct permissions, then we pass the FD to gzdopen() to write the
+ * compressed content.
+ */
+ if ((fd = open(gzip_log, O_WRONLY|O_CREAT|O_TRUNC, mode)) >= 0) {
+ /* File successfully created, now pass the FD to gzdopen() */
+ outfile = gzdopen(fd, "ab");
+ } else {
return -1;
}
@@ -193,6 +204,7 @@ compress_log_file(char *log_name)
gzclose(outfile);
return -1;
}
+
bytes_read = fread(buf, 1, LOG_CHUNK, source);
while (bytes_read > 0) {
int bytes_written = gzwrite(outfile, buf, bytes_read);
@@ -3402,7 +3414,7 @@ log__open_accesslogfile(int logfile_state, int locked)
return LOG_UNABLE_TO_OPENFILE;
}
} else if (loginfo.log_access_compress) {
- if (compress_log_file(newfile) != 0) {
+ if (compress_log_file(newfile, loginfo.log_access_mode) != 0) {
slapi_log_err(SLAPI_LOG_ERR, "log__open_auditfaillogfile",
"failed to compress rotated access log (%s)\n",
newfile);
@@ -3570,7 +3582,7 @@ log__open_securitylogfile(int logfile_state, int locked)
return LOG_UNABLE_TO_OPENFILE;
}
} else if (loginfo.log_security_compress) {
- if (compress_log_file(newfile) != 0) {
+ if (compress_log_file(newfile, loginfo.log_security_mode) != 0) {
slapi_log_err(SLAPI_LOG_ERR, "log__open_securitylogfile",
"failed to compress rotated security audit log (%s)\n",
newfile);
@@ -6288,7 +6300,7 @@ log__open_errorlogfile(int logfile_state, int locked)
return LOG_UNABLE_TO_OPENFILE;
}
} else if (loginfo.log_error_compress) {
- if (compress_log_file(newfile) != 0) {
+ if (compress_log_file(newfile, loginfo.log_error_mode) != 0) {
PR_snprintf(buffer, sizeof(buffer), "Failed to compress errors log file (%s)\n", newfile);
log__error_emergency(buffer, 1, 1);
} else {
@@ -6476,7 +6488,7 @@ log__open_auditlogfile(int logfile_state, int locked)
return LOG_UNABLE_TO_OPENFILE;
}
} else if (loginfo.log_audit_compress) {
- if (compress_log_file(newfile) != 0) {
+ if (compress_log_file(newfile, loginfo.log_audit_mode) != 0) {
slapi_log_err(SLAPI_LOG_ERR, "log__open_auditfaillogfile",
"failed to compress rotated audit log (%s)\n",
newfile);
@@ -6641,7 +6653,7 @@ log__open_auditfaillogfile(int logfile_state, int locked)
return LOG_UNABLE_TO_OPENFILE;
}
} else if (loginfo.log_auditfail_compress) {
- if (compress_log_file(newfile) != 0) {
+ if (compress_log_file(newfile, loginfo.log_auditfail_mode) != 0) {
slapi_log_err(SLAPI_LOG_ERR, "log__open_auditfaillogfile",
"failed to compress rotated auditfail log (%s)\n",
newfile);
diff --git a/ldap/servers/slapd/schema.c b/ldap/servers/slapd/schema.c
index a8e6b1210..9ef4ee4bf 100644
--- a/ldap/servers/slapd/schema.c
+++ b/ldap/servers/slapd/schema.c
@@ -903,7 +903,7 @@ oc_check_allowed_sv(Slapi_PBlock *pb, Slapi_Entry *e, const char *type, struct o
if (pb) {
PR_snprintf(errtext, sizeof(errtext),
- "attribute \"%s\" not allowed\n",
+ "attribute \"%s\" not allowed",
escape_string(type, ebuf));
slapi_pblock_set(pb, SLAPI_PB_RESULT_TEXT, errtext);
}
--
2.49.0
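The pattern generalizes to any library that creates files with its own hard-coded mode: create the file yourself with open() and the permissions you want, then hand the descriptor to the library. A minimal sketch with zlib (`compress_create` is an assumed helper name; the patch's version also streams the source log through gzwrite() in chunks):

```c
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>
#include <zlib.h>

/* Create gz_path with exactly `mode` (umask permitting), never with
 * gzopen()'s default 0644, then compress through the descriptor. */
static int
compress_create(const char *gz_path, mode_t mode)
{
    int fd = open(gz_path, O_WRONLY | O_CREAT | O_TRUNC, mode);
    gzFile out;

    if (fd < 0) {
        return -1;
    }
    if ((out = gzdopen(fd, "wb")) == NULL) {
        close(fd); /* gzdopen failed, so the fd is still ours to close */
        return -1;
    }
    gzputs(out, "compressed contents...\n");
    return gzclose(out) == Z_OK ? 0 : -1; /* gzclose also closes fd */
}

int main(void)
{
    return compress_create("/tmp/example.log.gz", 0600);
}
```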


@@ -1,590 +0,0 @@
From a8fe12fcfbe0f81935972c3eddae638a281551d1 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Wed, 16 Jul 2025 20:54:48 -0400
Subject: [PATCH] Issue 6888 - Missing access JSON logging for TLS/Client auth
Description:
TLS/Client auth logging was not converted to JSON (auth.c got missed)
Relates: https://github.com/389ds/389-ds-base/issues/6888
Reviewed by: spichugi(Thanks!)
---
.../logging/access_json_logging_test.py | 96 ++++++++-
ldap/servers/slapd/accesslog.c | 114 +++++++++++
ldap/servers/slapd/auth.c | 182 +++++++++++++-----
ldap/servers/slapd/log.c | 2 +
ldap/servers/slapd/slapi-private.h | 10 +
5 files changed, 353 insertions(+), 51 deletions(-)
diff --git a/dirsrvtests/tests/suites/logging/access_json_logging_test.py b/dirsrvtests/tests/suites/logging/access_json_logging_test.py
index ae91dc487..f0dc861a7 100644
--- a/dirsrvtests/tests/suites/logging/access_json_logging_test.py
+++ b/dirsrvtests/tests/suites/logging/access_json_logging_test.py
@@ -19,6 +19,8 @@ from lib389.idm.user import UserAccounts
from lib389.dirsrv_log import DirsrvAccessJSONLog
from lib389.index import VLVSearch, VLVIndex
from lib389.tasks import Tasks
+from lib389.config import CertmapLegacy
+from lib389.nss_ssl import NssSsl
from ldap.controls.vlv import VLVRequestControl
from ldap.controls.sss import SSSRequestControl
from ldap.controls import SimplePagedResultsControl
@@ -67,11 +69,11 @@ def get_log_event(inst, op, key=None, val=None, key2=None, val2=None):
if val == str(event[key]).lower() and \
val2 == str(event[key2]).lower():
return event
-
- elif key is not None and key in event:
- val = str(val).lower()
- if val == str(event[key]).lower():
- return event
+ elif key is not None:
+ if key in event:
+ val = str(val).lower()
+ if val == str(event[key]).lower():
+ return event
else:
return event
@@ -163,6 +165,7 @@ def test_access_json_format(topo_m2, setup_test):
14. Test PAGED SEARCH is logged correctly
15. Test PERSISTENT SEARCH is logged correctly
16. Test EXTENDED OP
+ 17. Test TLS_INFO is logged correctly
:expectedresults:
1. Success
2. Success
@@ -180,6 +183,7 @@ def test_access_json_format(topo_m2, setup_test):
14. Success
15. Success
16. Success
+ 17. Success
"""
inst = topo_m2.ms["supplier1"]
@@ -560,6 +564,88 @@ def test_access_json_format(topo_m2, setup_test):
assert event['oid_name'] == "REPL_END_NSDS50_REPLICATION_REQUEST_OID"
assert event['name'] == "replication-multisupplier-extop"
+ #
+ # TLS INFO/TLS CLIENT INFO
+ #
+ RDN_TEST_USER = 'testuser'
+ RDN_TEST_USER_WRONG = 'testuser_wrong'
+ inst.enable_tls()
+ inst.restart()
+
+ users = UserAccounts(inst, DEFAULT_SUFFIX)
+ user = users.create(properties={
+ 'uid': RDN_TEST_USER,
+ 'cn': RDN_TEST_USER,
+ 'sn': RDN_TEST_USER,
+ 'uidNumber': '1000',
+ 'gidNumber': '2000',
+ 'homeDirectory': f'/home/{RDN_TEST_USER}'
+ })
+
+ ssca_dir = inst.get_ssca_dir()
+ ssca = NssSsl(dbpath=ssca_dir)
+ ssca.create_rsa_user(RDN_TEST_USER)
+ ssca.create_rsa_user(RDN_TEST_USER_WRONG)
+
+ # Get the details of where the key and crt are.
+ tls_locs = ssca.get_rsa_user(RDN_TEST_USER)
+ tls_locs_wrong = ssca.get_rsa_user(RDN_TEST_USER_WRONG)
+
+ user.enroll_certificate(tls_locs['crt_der_path'])
+
+ # Turn on the certmap.
+ cm = CertmapLegacy(inst)
+ certmaps = cm.list()
+ certmaps['default']['DNComps'] = ''
+ certmaps['default']['FilterComps'] = ['cn']
+ certmaps['default']['VerifyCert'] = 'off'
+ cm.set(certmaps)
+
+ # Check that EXTERNAL is listed in supported mechns.
+ assert (inst.rootdse.supports_sasl_external())
+
+ # Restart to allow certmaps to be re-read: Note, we CAN NOT use post_open
+ # here, it breaks on auth. see lib389/__init__.py
+ inst.restart(post_open=False)
+
+ # Attempt a bind with TLS external
+ inst.open(saslmethod='EXTERNAL', connOnly=True, certdir=ssca_dir,
+ userkey=tls_locs['key'], usercert=tls_locs['crt'])
+ inst.restart()
+
+ event = get_log_event(inst, "TLS_INFO")
+ assert event is not None
+ assert 'tls_version' in event
+ assert 'keysize' in event
+ assert 'cipher' in event
+
+ event = get_log_event(inst, "TLS_CLIENT_INFO",
+ "subject",
+ "CN=testuser,O=testing,L=389ds,ST=Queensland,C=AU")
+ assert event is not None
+ assert 'tls_version' in event
+ assert 'keysize' in event
+ assert 'issuer' in event
+
+ event = get_log_event(inst, "TLS_CLIENT_INFO",
+ "client_dn",
+ "uid=testuser,ou=People,dc=example,dc=com")
+ assert event is not None
+ assert 'tls_version' in event
+ assert event['msg'] == "client bound"
+
+ # Check for failed certmap error
+ with pytest.raises(ldap.INVALID_CREDENTIALS):
+ inst.open(saslmethod='EXTERNAL', connOnly=True, certdir=ssca_dir,
+ userkey=tls_locs_wrong['key'],
+ usercert=tls_locs_wrong['crt'])
+
+ event = get_log_event(inst, "TLS_CLIENT_INFO", "err", -185)
+ assert event is not None
+ assert 'tls_version' in event
+ assert event['msg'] == "failed to map client certificate to LDAP DN"
+ assert event['err_msg'] == "Certificate couldn't be mapped to an ldap entry"
+
if __name__ == '__main__':
# Run isolated
diff --git a/ldap/servers/slapd/accesslog.c b/ldap/servers/slapd/accesslog.c
index 68022fe38..072ace203 100644
--- a/ldap/servers/slapd/accesslog.c
+++ b/ldap/servers/slapd/accesslog.c
@@ -1147,3 +1147,117 @@ slapd_log_access_sort(slapd_log_pblock *logpb)
return rc;
}
+
+/*
+ * TLS connection
+ *
+ * int32_t log_format
+ * time_t conn_time
+ * uint64_t conn_id
+ * const char *msg
+ * const char *tls_version
+ * int32_t keysize
+ * const char *cipher
+ * int32_t err
+ * const char *err_str
+ */
+int32_t
+slapd_log_access_tls(slapd_log_pblock *logpb)
+{
+ int32_t rc = 0;
+ char *msg = NULL;
+ json_object *json_obj = NULL;
+
+ if ((json_obj = build_base_obj(logpb, "TLS_INFO")) == NULL) {
+ return rc;
+ }
+
+ if (logpb->msg) {
+ json_object_object_add(json_obj, "msg", json_obj_add_str(logpb->msg));
+ }
+ if (logpb->tls_version) {
+ json_object_object_add(json_obj, "tls_version", json_obj_add_str(logpb->tls_version));
+ }
+ if (logpb->cipher) {
+ json_object_object_add(json_obj, "cipher", json_obj_add_str(logpb->cipher));
+ }
+ if (logpb->keysize) {
+ json_object_object_add(json_obj, "keysize", json_object_new_int(logpb->keysize));
+ }
+ if (logpb->err_str) {
+ json_object_object_add(json_obj, "err", json_object_new_int(logpb->err));
+ json_object_object_add(json_obj, "err_msg", json_obj_add_str(logpb->err_str));
+ }
+
+ /* Convert json object to string and log it */
+ msg = (char *)json_object_to_json_string_ext(json_obj, logpb->log_format);
+ rc = slapd_log_access_json(msg);
+
+ /* Done with JSON object - free it */
+ json_object_put(json_obj);
+
+ return rc;
+}
+
+/*
+ * TLS client auth
+ *
+ * int32_t log_format
+ * time_t conn_time
+ * uint64_t conn_id
+ * const char* tls_version
+ * const char* keysize
+ * const char* cipher
+ * const char* msg
+ * const char* subject
+ * const char* issuer
+ * int32_t err
+ * const char* err_str
+ * const char *client_dn
+ */
+int32_t
+slapd_log_access_tls_client_auth(slapd_log_pblock *logpb)
+{
+ int32_t rc = 0;
+ char *msg = NULL;
+ json_object *json_obj = NULL;
+
+ if ((json_obj = build_base_obj(logpb, "TLS_CLIENT_INFO")) == NULL) {
+ return rc;
+ }
+
+ if (logpb->tls_version) {
+ json_object_object_add(json_obj, "tls_version", json_obj_add_str(logpb->tls_version));
+ }
+ if (logpb->cipher) {
+ json_object_object_add(json_obj, "cipher", json_obj_add_str(logpb->cipher));
+ }
+ if (logpb->keysize) {
+ json_object_object_add(json_obj, "keysize", json_object_new_int(logpb->keysize));
+ }
+ if (logpb->subject) {
+ json_object_object_add(json_obj, "subject", json_obj_add_str(logpb->subject));
+ }
+ if (logpb->issuer) {
+ json_object_object_add(json_obj, "issuer", json_obj_add_str(logpb->issuer));
+ }
+ if (logpb->client_dn) {
+ json_object_object_add(json_obj, "client_dn", json_obj_add_str(logpb->client_dn));
+ }
+ if (logpb->msg) {
+ json_object_object_add(json_obj, "msg", json_obj_add_str(logpb->msg));
+ }
+ if (logpb->err_str) {
+ json_object_object_add(json_obj, "err", json_object_new_int(logpb->err));
+ json_object_object_add(json_obj, "err_msg", json_obj_add_str(logpb->err_str));
+ }
+
+ /* Convert json object to string and log it */
+ msg = (char *)json_object_to_json_string_ext(json_obj, logpb->log_format);
+ rc = slapd_log_access_json(msg);
+
+ /* Done with JSON object - free it */
+ json_object_put(json_obj);
+
+ return rc;
+}
diff --git a/ldap/servers/slapd/auth.c b/ldap/servers/slapd/auth.c
index e4231bf45..48e4b7129 100644
--- a/ldap/servers/slapd/auth.c
+++ b/ldap/servers/slapd/auth.c
@@ -1,6 +1,6 @@
/** BEGIN COPYRIGHT BLOCK
* Copyright (C) 2001 Sun Microsystems, Inc. Used by permission.
- * Copyright (C) 2005 Red Hat, Inc.
+ * Copyright (C) 2025 Red Hat, Inc.
* All rights reserved.
*
* License: GPL (version 3 or any later version).
@@ -363,19 +363,32 @@ handle_bad_certificate(void *clientData, PRFileDesc *prfd)
char sbuf[BUFSIZ], ibuf[BUFSIZ];
Connection *conn = (Connection *)clientData;
CERTCertificate *clientCert = slapd_ssl_peerCertificate(prfd);
-
PRErrorCode errorCode = PR_GetError();
char *subject = subject_of(clientCert);
char *issuer = issuer_of(clientCert);
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " " SLAPI_COMPONENT_NAME_NSPR " error %i (%s); unauthenticated client %s; issuer %s\n",
- conn->c_connid, errorCode, slapd_pr_strerror(errorCode),
- subject ? escape_string(subject, sbuf) : "NULL",
- issuer ? escape_string(issuer, ibuf) : "NULL");
+ int32_t log_format = config_get_accesslog_log_format();
+ slapd_log_pblock logpb = {0};
+
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.msg = "unauthenticated client";
+ logpb.subject = subject ? escape_string(subject, sbuf) : "NULL";
+ logpb.issuer = issuer ? escape_string(issuer, ibuf) : "NULL";
+ logpb.err = errorCode;
+ logpb.err_str = slapd_pr_strerror(errorCode);
+ slapd_log_access_tls_client_auth(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " " SLAPI_COMPONENT_NAME_NSPR " error %i (%s); unauthenticated client %s; issuer %s\n",
+ conn->c_connid, errorCode, slapd_pr_strerror(errorCode),
+ subject ? escape_string(subject, sbuf) : "NULL",
+ issuer ? escape_string(issuer, ibuf) : "NULL");
+ }
if (issuer)
- free(issuer);
+ slapi_ch_free_string(&issuer);
if (subject)
- free(subject);
+ slapi_ch_free_string(&subject);
if (clientCert)
CERT_DestroyCertificate(clientCert);
return -1; /* non-zero means reject this certificate */
@@ -394,7 +407,8 @@ handle_handshake_done(PRFileDesc *prfd, void *clientData)
{
Connection *conn = (Connection *)clientData;
CERTCertificate *clientCert = slapd_ssl_peerCertificate(prfd);
-
+ int32_t log_format = config_get_accesslog_log_format();
+ slapd_log_pblock logpb = {0};
char *clientDN = NULL;
int keySize = 0;
char *cipher = NULL;
@@ -403,19 +417,39 @@ handle_handshake_done(PRFileDesc *prfd, void *clientData)
SSLCipherSuiteInfo cipherInfo;
char *subject = NULL;
char sslversion[64];
+ int err = 0;
if ((slapd_ssl_getChannelInfo(prfd, &channelInfo, sizeof(channelInfo))) != SECSuccess) {
PRErrorCode errorCode = PR_GetError();
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " SSL failed to obtain channel info; " SLAPI_COMPONENT_NAME_NSPR " error %i (%s)\n",
- conn->c_connid, errorCode, slapd_pr_strerror(errorCode));
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.err = errorCode;
+ logpb.err_str = slapd_pr_strerror(errorCode);
+ logpb.msg = "SSL failed to obtain channel info; " SLAPI_COMPONENT_NAME_NSPR;
+ slapd_log_access_tls(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " SSL failed to obtain channel info; " SLAPI_COMPONENT_NAME_NSPR " error %i (%s)\n",
+ conn->c_connid, errorCode, slapd_pr_strerror(errorCode));
+ }
goto done;
}
+
if ((slapd_ssl_getCipherSuiteInfo(channelInfo.cipherSuite, &cipherInfo, sizeof(cipherInfo))) != SECSuccess) {
PRErrorCode errorCode = PR_GetError();
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " SSL failed to obtain cipher info; " SLAPI_COMPONENT_NAME_NSPR " error %i (%s)\n",
- conn->c_connid, errorCode, slapd_pr_strerror(errorCode));
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.err = errorCode;
+ logpb.err_str = slapd_pr_strerror(errorCode);
+ logpb.msg = "SSL failed to obtain cipher info; " SLAPI_COMPONENT_NAME_NSPR;
+ slapd_log_access_tls(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " SSL failed to obtain cipher info; " SLAPI_COMPONENT_NAME_NSPR " error %i (%s)\n",
+ conn->c_connid, errorCode, slapd_pr_strerror(errorCode));
+ }
goto done;
}
@@ -434,47 +468,84 @@ handle_handshake_done(PRFileDesc *prfd, void *clientData)
if (config_get_SSLclientAuth() == SLAPD_SSLCLIENTAUTH_OFF) {
(void)slapi_getSSLVersion_str(channelInfo.protocolVersion, sslversion, sizeof(sslversion));
- slapi_log_access(LDAP_DEBUG_STATS, "conn=%" PRIu64 " %s %i-bit %s\n",
- conn->c_connid,
- sslversion, keySize, cipher ? cipher : "NULL");
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.tls_version = sslversion;
+ logpb.keysize = keySize;
+ logpb.cipher = cipher ? cipher : "NULL";
+ slapd_log_access_tls(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS, "conn=%" PRIu64 " %s %i-bit %s\n",
+ conn->c_connid,
+ sslversion, keySize, cipher ? cipher : "NULL");
+ }
goto done;
}
if (clientCert == NULL) {
(void)slapi_getSSLVersion_str(channelInfo.protocolVersion, sslversion, sizeof(sslversion));
- slapi_log_access(LDAP_DEBUG_STATS, "conn=%" PRIu64 " %s %i-bit %s\n",
- conn->c_connid,
- sslversion, keySize, cipher ? cipher : "NULL");
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.tls_version = sslversion;
+ logpb.keysize = keySize;
+ logpb.cipher = cipher ? cipher : "NULL";
+ slapd_log_access_tls(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS, "conn=%" PRIu64 " %s %i-bit %s\n",
+ conn->c_connid,
+ sslversion, keySize, cipher ? cipher : "NULL");
+ }
} else {
subject = subject_of(clientCert);
if (!subject) {
(void)slapi_getSSLVersion_str(channelInfo.protocolVersion,
sslversion, sizeof(sslversion));
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " %s %i-bit %s; missing subject\n",
- conn->c_connid,
- sslversion, keySize, cipher ? cipher : "NULL");
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.msg = "missing subject";
+ logpb.tls_version = sslversion;
+ logpb.keysize = keySize;
+ logpb.cipher = cipher ? cipher : "NULL";
+ slapd_log_access_tls_client_auth(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " %s %i-bit %s; missing subject\n",
+ conn->c_connid,
+ sslversion, keySize, cipher ? cipher : "NULL");
+ }
goto done;
- }
- {
+ } else {
char *issuer = issuer_of(clientCert);
char sbuf[BUFSIZ], ibuf[BUFSIZ];
(void)slapi_getSSLVersion_str(channelInfo.protocolVersion,
sslversion, sizeof(sslversion));
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " %s %i-bit %s; client %s; issuer %s\n",
- conn->c_connid,
- sslversion, keySize,
- cipher ? cipher : "NULL",
- escape_string(subject, sbuf),
- issuer ? escape_string(issuer, ibuf) : "NULL");
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.tls_version = sslversion;
+ logpb.keysize = keySize;
+ logpb.cipher = cipher ? cipher : "NULL";
+ logpb.subject = escape_string(subject, sbuf);
+ logpb.issuer = issuer ? escape_string(issuer, ibuf) : "NULL";
+ slapd_log_access_tls_client_auth(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " %s %i-bit %s; client %s; issuer %s\n",
+ conn->c_connid,
+ sslversion, keySize,
+ cipher ? cipher : "NULL",
+ escape_string(subject, sbuf),
+ issuer ? escape_string(issuer, ibuf) : "NULL");
+ }
if (issuer)
- free(issuer);
+ slapi_ch_free_string(&issuer);
}
slapi_dn_normalize(subject);
{
LDAPMessage *chain = NULL;
char *basedn = config_get_basedn();
- int err;
err = ldapu_cert_to_ldap_entry(clientCert, internal_ld, basedn ? basedn : "" /*baseDN*/, &chain);
if (err == LDAPU_SUCCESS && chain) {
@@ -505,18 +576,37 @@ handle_handshake_done(PRFileDesc *prfd, void *clientData)
slapi_sdn_free(&sdn);
(void)slapi_getSSLVersion_str(channelInfo.protocolVersion,
sslversion, sizeof(sslversion));
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " %s client bound as %s\n",
- conn->c_connid,
- sslversion, clientDN);
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.msg = "client bound";
+ logpb.tls_version = sslversion;
+ logpb.client_dn = clientDN;
+ slapd_log_access_tls_client_auth(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " %s client bound as %s\n",
+ conn->c_connid,
+ sslversion, clientDN);
+ }
} else if (clientCert != NULL) {
(void)slapi_getSSLVersion_str(channelInfo.protocolVersion,
sslversion, sizeof(sslversion));
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " %s failed to map client "
- "certificate to LDAP DN (%s)\n",
- conn->c_connid,
- sslversion, extraErrorMsg);
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.msg = "failed to map client certificate to LDAP DN";
+ logpb.tls_version = sslversion;
+ logpb.err = err;
+ logpb.err_str = extraErrorMsg;
+ slapd_log_access_tls_client_auth(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " %s failed to map client "
+ "certificate to LDAP DN (%s)\n",
+ conn->c_connid,
+ sslversion, extraErrorMsg);
+ }
}
/*
diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c
index eab837166..06792a55a 100644
--- a/ldap/servers/slapd/log.c
+++ b/ldap/servers/slapd/log.c
@@ -7270,6 +7270,8 @@ slapd_log_pblock_init(slapd_log_pblock *logpb, int32_t log_format, Slapi_PBlock
slapi_pblock_get(pb, SLAPI_CONNECTION, &conn);
}
+ memset(logpb, 0, sizeof(slapd_log_pblock));
+
logpb->loginfo = &loginfo;
logpb->level = 256; /* default log level */
logpb->log_format = log_format;
diff --git a/ldap/servers/slapd/slapi-private.h b/ldap/servers/slapd/slapi-private.h
index 6438a81fe..da232ae2f 100644
--- a/ldap/servers/slapd/slapi-private.h
+++ b/ldap/servers/slapd/slapi-private.h
@@ -1549,6 +1549,13 @@ typedef struct slapd_log_pblock {
PRBool using_tls;
PRBool haproxied;
const char *bind_dn;
+ /* TLS */
+ const char *tls_version;
+ int32_t keysize;
+ const char *cipher;
+ const char *subject;
+ const char *issuer;
+ const char *client_dn;
/* Close connection */
const char *close_error;
const char *close_reason;
@@ -1619,6 +1626,7 @@ typedef struct slapd_log_pblock {
const char *oid;
const char *msg;
const char *name;
+ const char *err_str;
LDAPControl **request_controls;
LDAPControl **response_controls;
} slapd_log_pblock;
@@ -1645,6 +1653,8 @@ int32_t slapd_log_access_entry(slapd_log_pblock *logpb);
int32_t slapd_log_access_referral(slapd_log_pblock *logpb);
int32_t slapd_log_access_extop(slapd_log_pblock *logpb);
int32_t slapd_log_access_sort(slapd_log_pblock *logpb);
+int32_t slapd_log_access_tls(slapd_log_pblock *logpb);
+int32_t slapd_log_access_tls_client_auth(slapd_log_pblock *logpb);
#ifdef __cplusplus
}
--
2.49.0
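
The TLS hunks above follow the dispatch shape this series applies everywhere: populate a slapd_log_pblock and call the structured writer when a non-default log format is configured, otherwise fall back to the legacy printf-style slapi_log_access() line. Below is a minimal standalone sketch of that shape; the types, field names, and values are illustrative stand-ins, not the real slapd definitions.

    /* Standalone sketch of the dual-format logging dispatch used above.
     * Everything here is an illustrative stand-in for slapd's own types. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    enum { LOG_FORMAT_DEFAULT = 0, LOG_FORMAT_JSON = 1 };

    typedef struct {
        uint64_t conn_id;
        const char *tls_version;
        int32_t keysize;
        const char *cipher;
    } log_pblock;

    /* Structured writer (stand-in for slapd_log_access_tls_client_auth()). */
    static void
    log_tls_json(const log_pblock *pb)
    {
        printf("{\"conn_id\": %" PRIu64 ", \"tls_version\": \"%s\", "
               "\"keysize\": %" PRId32 ", \"cipher\": \"%s\"}\n",
               pb->conn_id, pb->tls_version, pb->keysize, pb->cipher);
    }

    /* Legacy writer: the historical access-log line. */
    static void
    log_tls_default(const log_pblock *pb)
    {
        printf("conn=%" PRIu64 " %s %" PRId32 "-bit %s\n",
               pb->conn_id, pb->tls_version, pb->keysize, pb->cipher);
    }

    static void
    log_tls(int log_format, const log_pblock *pb)
    {
        if (log_format != LOG_FORMAT_DEFAULT) {
            log_tls_json(pb);   /* same branch order as the patch */
        } else {
            log_tls_default(pb);
        }
    }

    int
    main(void)
    {
        log_pblock pb = { 42, "TLS1.3", 256, "TLS_AES_256_GCM_SHA384" };
        log_tls(LOG_FORMAT_JSON, &pb);
        log_tls(LOG_FORMAT_DEFAULT, &pb);
        return 0;
    }

Keeping both writers behind one dispatch point is also why the memset() added to slapd_log_pblock_init() matters: every caller now starts from a zeroed pblock and sets only the fields its event needs.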


@@ -1,67 +0,0 @@
From c44c45797a0e92fcdb6f0cc08f56816c7d77ffac Mon Sep 17 00:00:00 2001
From: Anuar Beisembayev <111912342+abeisemb@users.noreply.github.com>
Date: Wed, 23 Jul 2025 23:48:11 -0400
Subject: [PATCH] Issue 6772 - dsconf - Replicas with the "consumer" role allow
for viewing and modification of their changelog. (#6773)
dsconf currently allows users to set and retrieve changelogs in consumer replicas, which do not have officially supported changelogs. This can lead to undefined behavior and confusion.
This commit prints a warning message if the user tries to interact with a changelog on a consumer replica.
Resolves: https://github.com/389ds/389-ds-base/issues/6772
Reviewed by: @droideck
---
src/lib389/lib389/cli_conf/replication.py | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/src/lib389/lib389/cli_conf/replication.py b/src/lib389/lib389/cli_conf/replication.py
index 6f77f34ca..a18bf83ca 100644
--- a/src/lib389/lib389/cli_conf/replication.py
+++ b/src/lib389/lib389/cli_conf/replication.py
@@ -686,6 +686,9 @@ def set_per_backend_cl(inst, basedn, log, args):
replace_list = []
did_something = False
+ if (is_replica_role_consumer(inst, suffix)):
+ log.info("Warning: Changelogs are not supported for consumer replicas. You may run into undefined behavior.")
+
if args.encrypt:
cl.replace('nsslapd-encryptionalgorithm', 'AES')
del args.encrypt
@@ -715,6 +718,10 @@ def set_per_backend_cl(inst, basedn, log, args):
# that means there is a changelog config entry per backend (aka suffix)
def get_per_backend_cl(inst, basedn, log, args):
suffix = args.suffix
+
+ if (is_replica_role_consumer(inst, suffix)):
+ log.info("Warning: Changelogs are not supported for consumer replicas. You may run into undefined behavior.")
+
cl = Changelog(inst, suffix)
if args and args.json:
log.info(cl.get_all_attrs_json())
@@ -822,6 +829,22 @@ def del_repl_manager(inst, basedn, log, args):
log.info("Successfully deleted replication manager: " + manager_dn)
+def is_replica_role_consumer(inst, suffix):
+ """Helper function for get_per_backend_cl and set_per_backend_cl.
+ Makes sure the instance in question is not a consumer, which is a role that
+ does not support changelogs.
+ """
+ replicas = Replicas(inst)
+ try:
+ replica = replicas.get(suffix)
+ role = replica.get_role()
+ except ldap.NO_SUCH_OBJECT:
+ raise ValueError(f"Backend \"{suffix}\" is not enabled for replication")
+
+ if role == ReplicaRole.CONSUMER:
+ return True
+ else:
+ return False
#
# Agreements
--
2.49.0


@@ -1,360 +0,0 @@
From b5134beedc719094193331ddbff0ca75316f93ff Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Mon, 21 Jul 2025 18:07:21 -0400
Subject: [PATCH] Issue 6893 - Log user that is updated during password modify
extended operation
Description:
When a user's password is updated via an extended operation (password modify
plugin) we only log the bind DN and not which user was updated. While "internal
operation" logging will display the user, this should also be logged at the
default logging level.
Add access logging using "EXT_INFO" for the old logging format, and
"EXTENDED_OP_INFO" for json logging where we display the bind dn, target
dn, and message.
Relates: https://github.com/389ds/389-ds-base/issues/6893
Reviewed by: spichugi & tbordaz(Thanks!!)
---
.../logging/access_json_logging_test.py | 98 +++++++++++++++----
ldap/servers/slapd/accesslog.c | 47 +++++++++
ldap/servers/slapd/passwd_extop.c | 69 +++++++------
ldap/servers/slapd/slapi-private.h | 1 +
4 files changed, 169 insertions(+), 46 deletions(-)
diff --git a/dirsrvtests/tests/suites/logging/access_json_logging_test.py b/dirsrvtests/tests/suites/logging/access_json_logging_test.py
index f0dc861a7..699bd8c4d 100644
--- a/dirsrvtests/tests/suites/logging/access_json_logging_test.py
+++ b/dirsrvtests/tests/suites/logging/access_json_logging_test.py
@@ -11,7 +11,7 @@ import os
import time
import ldap
import pytest
-from lib389._constants import DEFAULT_SUFFIX, PASSWORD, LOG_ACCESS_LEVEL
+from lib389._constants import DEFAULT_SUFFIX, PASSWORD, LOG_ACCESS_LEVEL, DN_DM
from lib389.properties import TASK_WAIT
from lib389.topologies import topology_m2 as topo_m2
from lib389.idm.group import Groups
@@ -548,22 +548,6 @@ def test_access_json_format(topo_m2, setup_test):
"2.16.840.1.113730.3.4.3",
"LDAP_CONTROL_PERSISTENTSEARCH")
- #
- # Extended op
- #
- log.info("Test EXTENDED_OP")
- event = get_log_event(inst, "EXTENDED_OP", "oid",
- "2.16.840.1.113730.3.5.12")
- assert event is not None
- assert event['oid_name'] == "REPL_START_NSDS90_REPLICATION_REQUEST_OID"
- assert event['name'] == "replication-multisupplier-extop"
-
- event = get_log_event(inst, "EXTENDED_OP", "oid",
- "2.16.840.1.113730.3.5.5")
- assert event is not None
- assert event['oid_name'] == "REPL_END_NSDS50_REPLICATION_REQUEST_OID"
- assert event['name'] == "replication-multisupplier-extop"
-
#
# TLS INFO/TLS CLIENT INFO
#
@@ -579,7 +563,8 @@ def test_access_json_format(topo_m2, setup_test):
'sn': RDN_TEST_USER,
'uidNumber': '1000',
'gidNumber': '2000',
- 'homeDirectory': f'/home/{RDN_TEST_USER}'
+ 'homeDirectory': f'/home/{RDN_TEST_USER}',
+ 'userpassword': 'password'
})
ssca_dir = inst.get_ssca_dir()
@@ -646,6 +631,83 @@ def test_access_json_format(topo_m2, setup_test):
assert event['msg'] == "failed to map client certificate to LDAP DN"
assert event['err_msg'] == "Certificate couldn't be mapped to an ldap entry"
+ #
+ # Extended op
+ #
+ log.info("Test EXTENDED_OP")
+ event = get_log_event(inst, "EXTENDED_OP", "oid",
+ "2.16.840.1.113730.3.5.12")
+ assert event is not None
+ assert event['oid_name'] == "REPL_START_NSDS90_REPLICATION_REQUEST_OID"
+ assert event['name'] == "replication-multisupplier-extop"
+
+ event = get_log_event(inst, "EXTENDED_OP", "oid",
+ "2.16.840.1.113730.3.5.5")
+ assert event is not None
+ assert event['oid_name'] == "REPL_END_NSDS50_REPLICATION_REQUEST_OID"
+ assert event['name'] == "replication-multisupplier-extop"
+
+ #
+ # Extended op info
+ #
+ log.info("Test EXTENDED_OP_INFO")
+ OLD_PASSWD = 'password'
+ NEW_PASSWD = 'newpassword'
+
+ assert inst.simple_bind_s(DN_DM, PASSWORD)
+
+ assert inst.passwd_s(user.dn, OLD_PASSWD, NEW_PASSWD)
+ event = get_log_event(inst, "EXTENDED_OP_INFO", "name",
+ "passwd_modify_plugin")
+ assert event is not None
+ assert event['bind_dn'] == "cn=directory manager"
+ assert event['target_dn'] == user.dn.lower()
+ assert event['msg'] == "success"
+
+ # Test no such object
+ BAD_DN = user.dn + ",dc=not"
+ with pytest.raises(ldap.NO_SUCH_OBJECT):
+ inst.passwd_s(BAD_DN, OLD_PASSWD, NEW_PASSWD)
+
+ event = get_log_event(inst, "EXTENDED_OP_INFO", "target_dn", BAD_DN)
+ assert event is not None
+ assert event['bind_dn'] == "cn=directory manager"
+ assert event['target_dn'] == BAD_DN.lower()
+ assert event['msg'] == "No such entry exists."
+
+ # Test invalid old password
+ with pytest.raises(ldap.INVALID_CREDENTIALS):
+ inst.passwd_s(user.dn, "not_the_old_pw", NEW_PASSWD)
+ event = get_log_event(inst, "EXTENDED_OP_INFO", "err", 49)
+ assert event is not None
+ assert event['bind_dn'] == "cn=directory manager"
+ assert event['target_dn'] == user.dn.lower()
+ assert event['msg'] == "Invalid oldPasswd value."
+
+ # Test user without permissions
+ user2 = users.create(properties={
+ 'uid': RDN_TEST_USER + "2",
+ 'cn': RDN_TEST_USER + "2",
+ 'sn': RDN_TEST_USER + "2",
+ 'uidNumber': '1001',
+ 'gidNumber': '2001',
+ 'homeDirectory': f'/home/{RDN_TEST_USER + "2"}',
+ 'userpassword': 'password'
+ })
+ inst.simple_bind_s(user2.dn, 'password')
+ with pytest.raises(ldap.INSUFFICIENT_ACCESS):
+ inst.passwd_s(user.dn, NEW_PASSWD, OLD_PASSWD)
+ event = get_log_event(inst, "EXTENDED_OP_INFO", "err", 50)
+ assert event is not None
+ assert event['bind_dn'] == user2.dn.lower()
+ assert event['target_dn'] == user.dn.lower()
+ assert event['msg'] == "Insufficient access rights"
+
+
+ # Reset bind
+ inst.simple_bind_s(DN_DM, PASSWORD)
+
+
if __name__ == '__main__':
# Run isolated
diff --git a/ldap/servers/slapd/accesslog.c b/ldap/servers/slapd/accesslog.c
index 072ace203..46228d4a1 100644
--- a/ldap/servers/slapd/accesslog.c
+++ b/ldap/servers/slapd/accesslog.c
@@ -1113,6 +1113,53 @@ slapd_log_access_extop(slapd_log_pblock *logpb)
return rc;
}
+/*
+ * Extended operation information
+ *
+ * int32_t log_format
+ * time_t conn_time
+ * uint64_t conn_id
+ * int32_t op_id
+ * const char *name
+ * const char *bind_dn
+ * const char *tartet_dn
+ * const char *target_dn
+ */
+int32_t
+slapd_log_access_extop_info(slapd_log_pblock *logpb)
+{
+ int32_t rc = 0;
+ char *msg = NULL;
+ json_object *json_obj = NULL;
+
+ if ((json_obj = build_base_obj(logpb, "EXTENDED_OP_INFO")) == NULL) {
+ return rc;
+ }
+
+ if (logpb->name) {
+ json_object_object_add(json_obj, "name", json_obj_add_str(logpb->name));
+ }
+ if (logpb->target_dn) {
+ json_object_object_add(json_obj, "target_dn", json_obj_add_str(logpb->target_dn));
+ }
+ if (logpb->bind_dn) {
+ json_object_object_add(json_obj, "bind_dn", json_obj_add_str(logpb->bind_dn));
+ }
+ if (logpb->msg) {
+ json_object_object_add(json_obj, "msg", json_obj_add_str(logpb->msg));
+ }
+ json_object_object_add(json_obj, "err", json_object_new_int(logpb->err));
+
+ /* Convert json object to string and log it */
+ msg = (char *)json_object_to_json_string_ext(json_obj, logpb->log_format);
+ rc = slapd_log_access_json(msg);
+
+ /* Done with JSON object - free it */
+ json_object_put(json_obj);
+
+ return rc;
+}
+
/*
* Sort
*
diff --git a/ldap/servers/slapd/passwd_extop.c b/ldap/servers/slapd/passwd_extop.c
index 4bb60afd6..69bb3494c 100644
--- a/ldap/servers/slapd/passwd_extop.c
+++ b/ldap/servers/slapd/passwd_extop.c
@@ -465,12 +465,14 @@ passwd_modify_extop(Slapi_PBlock *pb)
BerElement *response_ber = NULL;
Slapi_Entry *targetEntry = NULL;
Connection *conn = NULL;
+ Operation *pb_op = NULL;
LDAPControl **req_controls = NULL;
LDAPControl **resp_controls = NULL;
passwdPolicy *pwpolicy = NULL;
Slapi_DN *target_sdn = NULL;
Slapi_Entry *referrals = NULL;
- /* Slapi_DN sdn; */
+ Slapi_Backend *be = NULL;
+ int32_t log_format = config_get_accesslog_log_format();
slapi_log_err(SLAPI_LOG_TRACE, "passwd_modify_extop", "=>\n");
@@ -647,7 +649,7 @@ parse_req_done:
}
dn = slapi_sdn_get_ndn(target_sdn);
if (dn == NULL || *dn == '\0') {
- /* Refuse the operation because they're bound anonymously */
+ /* Invalid DN - refuse the operation */
errMesg = "Invalid dn.";
rc = LDAP_INVALID_DN_SYNTAX;
goto free_and_return;
@@ -724,14 +726,19 @@ parse_req_done:
ber_free(response_ber, 1);
}
- slapi_pblock_set(pb, SLAPI_ORIGINAL_TARGET, (void *)dn);
+ slapi_pblock_get(pb, SLAPI_OPERATION, &pb_op);
+ if (pb_op == NULL) {
+ slapi_log_err(SLAPI_LOG_ERR, "passwd_modify_extop", "pb_op is NULL\n");
+ goto free_and_return;
+ }
+ slapi_pblock_set(pb, SLAPI_ORIGINAL_TARGET, (void *)dn);
/* Now we have the DN, look for the entry */
ret = passwd_modify_getEntry(dn, &targetEntry);
/* If we can't find the entry, then that's an error */
if (ret) {
/* Couldn't find the entry, fail */
- errMesg = "No such Entry exists.";
+ errMesg = "No such entry exists.";
rc = LDAP_NO_SUCH_OBJECT;
goto free_and_return;
}
@@ -742,30 +749,18 @@ parse_req_done:
leak any useful information to the client such as current password
wrong, etc.
*/
- Operation *pb_op = NULL;
- slapi_pblock_get(pb, SLAPI_OPERATION, &pb_op);
- if (pb_op == NULL) {
- slapi_log_err(SLAPI_LOG_ERR, "passwd_modify_extop", "pb_op is NULL\n");
- goto free_and_return;
- }
-
operation_set_target_spec(pb_op, slapi_entry_get_sdn(targetEntry));
slapi_pblock_set(pb, SLAPI_REQUESTOR_ISROOT, &pb_op->o_isroot);
- /* In order to perform the access control check , we need to select a backend (even though
- * we don't actually need it otherwise).
- */
- {
- Slapi_Backend *be = NULL;
-
- be = slapi_mapping_tree_find_backend_for_sdn(slapi_entry_get_sdn(targetEntry));
- if (NULL == be) {
- errMesg = "Failed to find backend for target entry";
- rc = LDAP_OPERATIONS_ERROR;
- goto free_and_return;
- }
- slapi_pblock_set(pb, SLAPI_BACKEND, be);
+ /* In order to perform the access control check, we need to select a backend (even though
+ * we don't actually need it otherwise). */
+ be = slapi_mapping_tree_find_backend_for_sdn(slapi_entry_get_sdn(targetEntry));
+ if (NULL == be) {
+ errMesg = "Failed to find backend for target entry";
+ rc = LDAP_NO_SUCH_OBJECT;
+ goto free_and_return;
}
+ slapi_pblock_set(pb, SLAPI_BACKEND, be);
/* Check if the pwpolicy control is present */
slapi_pblock_get(pb, SLAPI_PWPOLICY, &need_pwpolicy_ctrl);
@@ -797,10 +792,7 @@ parse_req_done:
/* Check if password policy allows users to change their passwords. We need to do
* this here since the normal modify code doesn't perform this check for
* internal operations. */
-
- Connection *pb_conn;
- slapi_pblock_get(pb, SLAPI_CONNECTION, &pb_conn);
- if (!pb_op->o_isroot && !pb_conn->c_needpw && !pwpolicy->pw_change) {
+ if (!pb_op->o_isroot && !conn->c_needpw && !pwpolicy->pw_change) {
if (NULL == bindSDN) {
bindSDN = slapi_sdn_new_normdn_byref(bindDN);
}
@@ -848,6 +840,27 @@ free_and_return:
slapi_log_err(SLAPI_LOG_PLUGIN, "passwd_modify_extop",
"%s\n", errMesg ? errMesg : "success");
+ if (dn) {
+ /* Log the target ndn (if we have a target ndn) */
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ /* JSON logging */
+ slapd_log_pblock logpb = {0};
+ slapd_log_pblock_init(&logpb, log_format, pb);
+ logpb.name = "passwd_modify_plugin";
+ logpb.target_dn = dn;
+ logpb.bind_dn = bindDN;
+ logpb.msg = errMesg ? errMesg : "success";
+ logpb.err = rc;
+ slapd_log_access_extop_info(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " op=%d EXT_INFO name=\"passwd_modify_plugin\" bind_dn=\"%s\" target_dn=\"%s\" msg=\"%s\" rc=%d\n",
+ conn ? conn->c_connid : -1, pb_op ? pb_op->o_opid : -1,
+ bindDN ? bindDN : "", dn,
+ errMesg ? errMesg : "success", rc);
+ }
+ }
+
if ((rc == LDAP_REFERRAL) && (referrals)) {
send_referrals_from_entry(pb, referrals);
} else {
diff --git a/ldap/servers/slapd/slapi-private.h b/ldap/servers/slapd/slapi-private.h
index da232ae2f..e9abf8b75 100644
--- a/ldap/servers/slapd/slapi-private.h
+++ b/ldap/servers/slapd/slapi-private.h
@@ -1652,6 +1652,7 @@ int32_t slapd_log_access_vlv(slapd_log_pblock *logpb);
int32_t slapd_log_access_entry(slapd_log_pblock *logpb);
int32_t slapd_log_access_referral(slapd_log_pblock *logpb);
int32_t slapd_log_access_extop(slapd_log_pblock *logpb);
+int32_t slapd_log_access_extop_info(slapd_log_pblock *logpb);
int32_t slapd_log_access_sort(slapd_log_pblock *logpb);
int32_t slapd_log_access_tls(slapd_log_pblock *logpb);
int32_t slapd_log_access_tls_client_auth(slapd_log_pblock *logpb);
--
2.49.0
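
slapd_log_access_extop_info() above assembles the EXTENDED_OP_INFO event with json-c, adding each key only when the corresponding pblock field is set. Here is a self-contained sketch of that construction using plain json-c; the field values are invented for illustration, and json_object_new_string() stands in for the internal json_obj_add_str() helper.

    /* Sketch of building an EXTENDED_OP_INFO-style event with json-c.
     * Build with: gcc sketch.c $(pkg-config --cflags --libs json-c) */
    #include <json-c/json.h>
    #include <stdio.h>

    int
    main(void)
    {
        json_object *event = json_object_new_object();

        /* Example values only; in the server these come from the pblock. */
        const char *name = "passwd_modify_plugin";
        const char *target_dn = "uid=test_user,ou=people,dc=example,dc=com";
        const char *bind_dn = "cn=directory manager";
        const char *msg = "success";

        /* Mirror the optional-field handling: add a key only when set. */
        if (name)
            json_object_object_add(event, "name", json_object_new_string(name));
        if (target_dn)
            json_object_object_add(event, "target_dn", json_object_new_string(target_dn));
        if (bind_dn)
            json_object_object_add(event, "bind_dn", json_object_new_string(bind_dn));
        if (msg)
            json_object_object_add(event, "msg", json_object_new_string(msg));
        json_object_object_add(event, "err", json_object_new_int(0));

        /* Serialize, then release the reference as the patch does with
         * json_object_put(). */
        printf("%s\n", json_object_to_json_string(event));
        json_object_put(event);
        return 0;
    }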


@@ -1,53 +0,0 @@
From 048aa39d4c4955f6d9e3b018d4b1fc057f52d130 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Thu, 24 Jul 2025 19:09:40 +0200
Subject: [PATCH] Issue 6901 - Update changelog trimming logging
Description:
* Set SLAPI_LOG_ERR for message in `_cl5DispatchTrimThread`
* Set correct function name for logs in `_cl5TrimEntry`.
* Add number of scanned entries to the log.
Fixes: https://github.com/389ds/389-ds-base/issues/6901
Reviewed by: @mreynolds389, @progier389 (Thanks!)
---
ldap/servers/plugins/replication/cl5_api.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/ldap/servers/plugins/replication/cl5_api.c b/ldap/servers/plugins/replication/cl5_api.c
index 3c356abc0..1d62aa020 100644
--- a/ldap/servers/plugins/replication/cl5_api.c
+++ b/ldap/servers/plugins/replication/cl5_api.c
@@ -2007,7 +2007,7 @@ _cl5DispatchTrimThread(Replica *replica)
(void *)replica, PR_PRIORITY_NORMAL, PR_GLOBAL_THREAD,
PR_UNJOINABLE_THREAD, DEFAULT_THREAD_STACKSIZE);
if (NULL == pth) {
- slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl,
+ slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name_cl,
"_cl5DispatchTrimThread - Failed to create trimming thread for %s"
"; NSPR error - %d\n", replica_get_name(replica),
PR_GetError());
@@ -2788,7 +2788,7 @@ _cl5TrimEntry(dbi_val_t *key, dbi_val_t *data, void *ctx)
return DBI_RC_NOTFOUND;
} else {
slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl,
- "_cl5TrimReplica - Changelog purge skipped anchor csn %s\n",
+ "_cl5TrimEntry - Changelog purge skipped anchor csn %s\n",
(char*)key->data);
return DBI_RC_SUCCESS;
}
@@ -2867,8 +2867,8 @@ _cl5TrimReplica(Replica *r)
slapi_ch_free((void**)&dblcictx.rids);
if (dblcictx.changed.tot) {
- slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl, "_cl5TrimReplica - Trimmed %ld changes from the changelog\n",
- dblcictx.changed.tot);
+ slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl, "_cl5TrimReplica - Scanned %ld records, and trimmed %ld changes from the changelog\n",
+ dblcictx.seen.tot, dblcictx.changed.tot);
}
}
--
2.49.0
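
The _cl5TrimReplica hunk adds a second counter so the log line reports how many records were scanned alongside how many were trimmed, and the message is still emitted only when something was actually trimmed. A tiny standalone sketch of that counter pair, with stand-in names and a fake record walk:

    /* Sketch of the scanned/trimmed counter pair from _cl5TrimReplica.
     * The context struct and record walk are illustrative stand-ins. */
    #include <stdio.h>

    struct trim_ctx {
        long seen;    /* records examined (dblcictx.seen.tot in the patch) */
        long changed; /* records trimmed  (dblcictx.changed.tot) */
    };

    int
    main(void)
    {
        struct trim_ctx ctx = { 0, 0 };

        /* Pretend to walk ten changelog records, trimming every third one. */
        for (int i = 0; i < 10; i++) {
            ctx.seen++;
            if (i % 3 == 0) {
                ctx.changed++;
            }
        }

        /* Same guard as the patch: stay quiet when nothing was trimmed. */
        if (ctx.changed) {
            printf("Scanned %ld records, and trimmed %ld changes from the changelog\n",
                   ctx.seen, ctx.changed);
        }
        return 0;
    }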

File diff suppressed because it is too large


@@ -1,380 +0,0 @@
From 697f0ed364b8649141adc283a6a45702d815421e Mon Sep 17 00:00:00 2001
From: Akshay Adhikari <aadhikar@redhat.com>
Date: Mon, 28 Jul 2025 18:14:15 +0530
Subject: [PATCH] Issue 6663 - Fix NULL subsystem crash in JSON error logging
(#6883)
Description: Fixes crash in JSON error logging when subsystem is NULL.
Parametrized the test case for better debugging.
Relates: https://github.com/389ds/389-ds-base/issues/6663
Reviewed by: @mreynolds389
---
.../tests/suites/clu/dsconf_logging.py | 168 ------------------
.../tests/suites/clu/dsconf_logging_test.py | 164 +++++++++++++++++
ldap/servers/slapd/log.c | 2 +-
3 files changed, 165 insertions(+), 169 deletions(-)
delete mode 100644 dirsrvtests/tests/suites/clu/dsconf_logging.py
create mode 100644 dirsrvtests/tests/suites/clu/dsconf_logging_test.py
diff --git a/dirsrvtests/tests/suites/clu/dsconf_logging.py b/dirsrvtests/tests/suites/clu/dsconf_logging.py
deleted file mode 100644
index 1c2f7fc2e..000000000
--- a/dirsrvtests/tests/suites/clu/dsconf_logging.py
+++ /dev/null
@@ -1,168 +0,0 @@
-# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2025 Red Hat, Inc.
-# All rights reserved.
-#
-# License: GPL (version 3 or any later version).
-# See LICENSE for details.
-# --- END COPYRIGHT BLOCK ---
-#
-import json
-import subprocess
-import logging
-import pytest
-from lib389._constants import DN_DM
-from lib389.topologies import topology_st as topo
-
-pytestmark = pytest.mark.tier1
-
-log = logging.getLogger(__name__)
-
-SETTINGS = [
- ('logging-enabled', None),
- ('logging-disabled', None),
- ('mode', '700'),
- ('compress-enabled', None),
- ('compress-disabled', None),
- ('buffering-enabled', None),
- ('buffering-disabled', None),
- ('max-logs', '4'),
- ('max-logsize', '7'),
- ('rotation-interval', '2'),
- ('rotation-interval-unit', 'week'),
- ('rotation-tod-enabled', None),
- ('rotation-tod-disabled', None),
- ('rotation-tod-hour', '12'),
- ('rotation-tod-minute', '20'),
- ('deletion-interval', '3'),
- ('deletion-interval-unit', 'day'),
- ('max-disk-space', '20'),
- ('free-disk-space', '2'),
-]
-
-DEFAULT_TIME_FORMAT = "%FT%TZ"
-
-
-def execute_dsconf_command(dsconf_cmd, subcommands):
- """Execute dsconf command and return output and return code"""
-
- cmdline = dsconf_cmd + subcommands
- proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE)
- out, _ = proc.communicate()
- return out.decode('utf-8'), proc.returncode
-
-
-def get_dsconf_base_cmd(topo):
- """Return base dsconf command list"""
- return ['/usr/sbin/dsconf', topo.standalone.serverid,
- '-j', '-D', DN_DM, '-w', 'password', 'logging']
-
-
-def test_log_settings(topo):
- """Test each log setting can be set successfully
-
- :id: b800fd03-37f5-4e74-9af8-eeb07030eb52
- :setup: Standalone DS instance
- :steps:
- 1. Test each log's settings
- :expectedresults:
- 1. Success
- """
-
- dsconf_cmd = get_dsconf_base_cmd(topo)
- for log_type in ['access', 'audit', 'auditfail', 'error', 'security']:
- # Test "get" command
- output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'get'])
- assert rc == 0
- json_result = json.loads(output)
- default_location = json_result['Log name and location']
-
- # Log location
- output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'set',
- 'location',
- f'/tmp/{log_type}'])
- assert rc == 0
- output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'set',
- 'location',
- default_location])
- assert rc == 0
-
- # Log levels
- if log_type == "access":
- # List levels
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'list-levels'])
- assert rc == 0
-
- # Set levels
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set', 'level',
- 'internal'])
- assert rc == 0
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set', 'level',
- 'internal', 'entry'])
- assert rc == 0
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set', 'level',
- 'internal', 'default'])
- assert rc == 0
-
- if log_type == "error":
- # List levels
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'list-levels'])
- assert rc == 0
-
- # Set levels
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set', 'level',
- 'plugin', 'replication'])
- assert rc == 0
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set', 'level',
- 'default'])
- assert rc == 0
-
- # Log formats
- if log_type in ["access", "audit", "error"]:
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set',
- 'time-format', '%D'])
- assert rc == 0
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set',
- 'time-format',
- DEFAULT_TIME_FORMAT])
- assert rc == 0
-
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set',
- 'log-format',
- 'json'])
- assert rc == 0
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set',
- 'log-format',
- 'default'])
- assert rc == 0
-
- # Audit log display attrs
- if log_type == "audit":
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set',
- 'display-attrs', 'cn'])
- assert rc == 0
-
- # Common settings
- for attr, value in SETTINGS:
- if log_type == "auditfail" and attr.startswith("buffer"):
- # auditfail doesn't have a buffering settings
- continue
-
- if value is None:
- output, rc = execute_dsconf_command(dsconf_cmd, [log_type,
- 'set', attr])
- else:
- output, rc = execute_dsconf_command(dsconf_cmd, [log_type,
- 'set', attr, value])
- assert rc == 0
diff --git a/dirsrvtests/tests/suites/clu/dsconf_logging_test.py b/dirsrvtests/tests/suites/clu/dsconf_logging_test.py
new file mode 100644
index 000000000..ca3f71997
--- /dev/null
+++ b/dirsrvtests/tests/suites/clu/dsconf_logging_test.py
@@ -0,0 +1,164 @@
+# --- BEGIN COPYRIGHT BLOCK ---
+# Copyright (C) 2025 Red Hat, Inc.
+# All rights reserved.
+#
+# License: GPL (version 3 or any later version).
+# See LICENSE for details.
+# --- END COPYRIGHT BLOCK ---
+#
+import json
+import subprocess
+import logging
+import pytest
+from lib389._constants import DN_DM
+from lib389.topologies import topology_st as topo
+
+pytestmark = pytest.mark.tier1
+
+log = logging.getLogger(__name__)
+
+SETTINGS = [
+ ('logging-enabled', None),
+ ('logging-disabled', None),
+ ('mode', '700'),
+ ('compress-enabled', None),
+ ('compress-disabled', None),
+ ('buffering-enabled', None),
+ ('buffering-disabled', None),
+ ('max-logs', '4'),
+ ('max-logsize', '7'),
+ ('rotation-interval', '2'),
+ ('rotation-interval-unit', 'week'),
+ ('rotation-tod-enabled', None),
+ ('rotation-tod-disabled', None),
+ ('rotation-tod-hour', '12'),
+ ('rotation-tod-minute', '20'),
+ ('deletion-interval', '3'),
+ ('deletion-interval-unit', 'day'),
+ ('max-disk-space', '20'),
+ ('free-disk-space', '2'),
+]
+
+DEFAULT_TIME_FORMAT = "%FT%TZ"
+
+
+def execute_dsconf_command(dsconf_cmd, subcommands):
+ """Execute dsconf command and return output and return code"""
+
+ cmdline = dsconf_cmd + subcommands
+ proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ out, err = proc.communicate()
+
+ if proc.returncode != 0 and err:
+ log.error(f"Command failed: {' '.join(cmdline)}")
+ log.error(f"Stderr: {err.decode('utf-8')}")
+
+ return out.decode('utf-8'), proc.returncode
+
+
+def get_dsconf_base_cmd(topo):
+ """Return base dsconf command list"""
+ return ['/usr/sbin/dsconf', topo.standalone.serverid,
+ '-j', '-D', DN_DM, '-w', 'password', 'logging']
+
+
+@pytest.mark.parametrize("log_type", ['access', 'audit', 'auditfail', 'error', 'security'])
+def test_log_settings(topo, log_type):
+ """Test each log setting can be set successfully
+
+ :id: b800fd03-37f5-4e74-9af8-eeb07030eb52
+ :setup: Standalone DS instance
+ :steps:
+ 1. Test each log's settings
+ :expectedresults:
+ 1. Success
+ """
+
+ dsconf_cmd = get_dsconf_base_cmd(topo)
+
+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'get'])
+ assert rc == 0
+ json_result = json.loads(output)
+ default_location = json_result['Log name and location']
+
+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'set',
+ 'location',
+ f'/tmp/{log_type}'])
+ assert rc == 0
+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'set',
+ 'location',
+ default_location])
+ assert rc == 0
+
+ if log_type == "access":
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'list-levels'])
+ assert rc == 0
+
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set', 'level',
+ 'internal'])
+ assert rc == 0
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set', 'level',
+ 'internal', 'entry'])
+ assert rc == 0
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set', 'level',
+ 'internal', 'default'])
+ assert rc == 0
+
+ if log_type == "error":
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'list-levels'])
+ assert rc == 0
+
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set', 'level',
+ 'plugin', 'replication'])
+ assert rc == 0
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set', 'level',
+ 'default'])
+ assert rc == 0
+
+ if log_type in ["access", "audit", "error"]:
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set',
+ 'time-format', '%D'])
+ assert rc == 0
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set',
+ 'time-format',
+ DEFAULT_TIME_FORMAT])
+ assert rc == 0
+
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set',
+ 'log-format',
+ 'json'])
+ assert rc == 0, f"Failed to set {log_type} log-format to json: {output}"
+
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set',
+ 'log-format',
+ 'default'])
+ assert rc == 0, f"Failed to set {log_type} log-format to default: {output}"
+
+ if log_type == "audit":
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set',
+ 'display-attrs', 'cn'])
+ assert rc == 0
+
+ for attr, value in SETTINGS:
+ if log_type == "auditfail" and attr.startswith("buffer"):
+ continue
+
+ if value is None:
+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type,
+ 'set', attr])
+ else:
+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type,
+ 'set', attr, value])
+ assert rc == 0
diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c
index 06792a55a..91ba23047 100644
--- a/ldap/servers/slapd/log.c
+++ b/ldap/servers/slapd/log.c
@@ -2949,7 +2949,7 @@ vslapd_log_error(
json_obj = json_object_new_object();
json_object_object_add(json_obj, "local_time", json_object_new_string(local_time));
json_object_object_add(json_obj, "severity", json_object_new_string(get_log_sev_name(sev_level, sev_name)));
- json_object_object_add(json_obj, "subsystem", json_object_new_string(subsystem));
+ json_object_object_add(json_obj, "subsystem", json_object_new_string(subsystem ? subsystem : ""));
json_object_object_add(json_obj, "msg", json_object_new_string(vbuf));
PR_snprintf(buffer, sizeof(buffer), "%s\n",
--
2.49.0
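
The one-line log.c fix works because json-c's json_object_new_string() measures its argument's length and does not accept NULL, so a NULL subsystem has to be replaced with an empty string before the call. A minimal standalone sketch of the guard, with illustrative names:

    /* Sketch of the NULL-subsystem guard from the log.c fix.
     * json_object_new_string() must never be handed a NULL pointer. */
    #include <json-c/json.h>
    #include <stdio.h>

    static json_object *
    make_error_event(const char *subsystem, const char *msg)
    {
        json_object *obj = json_object_new_object();
        /* The guard added by the patch: substitute "" for NULL. */
        json_object_object_add(obj, "subsystem",
                               json_object_new_string(subsystem ? subsystem : ""));
        json_object_object_add(obj, "msg", json_object_new_string(msg));
        return obj;
    }

    int
    main(void)
    {
        /* A NULL subsystem no longer crashes the JSON writer. */
        json_object *obj = make_error_event(NULL, "plugin failed to start");
        printf("%s\n", json_object_to_json_string(obj));
        json_object_put(obj);
        return 0;
    }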


@@ -1,98 +0,0 @@
From d3eee2527912785505feba9bedb6d0ae988c69e5 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Wed, 23 Jul 2025 19:35:32 -0400
Subject: [PATCH] Issue 6895 - Crash if repl keep alive entry can not be
created
Description:
Heap use-after-free when logging that the replication keep-alive entry cannot
be created. slapi_add_internal_pb() frees the Slapi entry, and then we try to
get the DN from the freed entry, which results in a use-after-free crash.
Relates: https://github.com/389ds/389-ds-base/issues/6895
Reviewed by: spichugi(Thanks!)
---
ldap/servers/plugins/chainingdb/cb_config.c | 3 +--
ldap/servers/plugins/posix-winsync/posix-winsync.c | 1 -
ldap/servers/plugins/replication/repl5_init.c | 3 ---
ldap/servers/plugins/replication/repl5_replica.c | 8 ++++----
4 files changed, 5 insertions(+), 10 deletions(-)
diff --git a/ldap/servers/plugins/chainingdb/cb_config.c b/ldap/servers/plugins/chainingdb/cb_config.c
index 40a7088d7..24fa1bcb3 100644
--- a/ldap/servers/plugins/chainingdb/cb_config.c
+++ b/ldap/servers/plugins/chainingdb/cb_config.c
@@ -44,8 +44,7 @@ cb_config_add_dse_entries(cb_backend *cb, char **entries, char *string1, char *s
slapi_pblock_get(util_pb, SLAPI_PLUGIN_INTOP_RESULT, &res);
if (LDAP_SUCCESS != res && LDAP_ALREADY_EXISTS != res) {
slapi_log_err(SLAPI_LOG_ERR, CB_PLUGIN_SUBSYSTEM,
- "cb_config_add_dse_entries - Unable to add config entry (%s) to the DSE: %s\n",
- slapi_entry_get_dn(e),
+ "cb_config_add_dse_entries - Unable to add config entry to the DSE: %s\n",
ldap_err2string(res));
rc = res;
slapi_pblock_destroy(util_pb);
diff --git a/ldap/servers/plugins/posix-winsync/posix-winsync.c b/ldap/servers/plugins/posix-winsync/posix-winsync.c
index 51a55b643..3a002bb70 100644
--- a/ldap/servers/plugins/posix-winsync/posix-winsync.c
+++ b/ldap/servers/plugins/posix-winsync/posix-winsync.c
@@ -1626,7 +1626,6 @@ posix_winsync_end_update_cb(void *cbdata __attribute__((unused)),
"posix_winsync_end_update_cb: "
"add task entry\n");
}
- /* slapi_entry_free(e_task); */
slapi_pblock_destroy(pb);
pb = NULL;
posix_winsync_config_reset_MOFTaskCreated();
diff --git a/ldap/servers/plugins/replication/repl5_init.c b/ldap/servers/plugins/replication/repl5_init.c
index 8bc0b5372..5047fb8dc 100644
--- a/ldap/servers/plugins/replication/repl5_init.c
+++ b/ldap/servers/plugins/replication/repl5_init.c
@@ -682,7 +682,6 @@ create_repl_schema_policy(void)
repl_schema_top,
ldap_err2string(return_value));
rc = -1;
- slapi_entry_free(e); /* The entry was not consumed */
goto done;
}
slapi_pblock_destroy(pb);
@@ -703,7 +702,6 @@ create_repl_schema_policy(void)
repl_schema_supplier,
ldap_err2string(return_value));
rc = -1;
- slapi_entry_free(e); /* The entry was not consumed */
goto done;
}
slapi_pblock_destroy(pb);
@@ -724,7 +722,6 @@ create_repl_schema_policy(void)
repl_schema_consumer,
ldap_err2string(return_value));
rc = -1;
- slapi_entry_free(e); /* The entry was not consumed */
goto done;
}
slapi_pblock_destroy(pb);
diff --git a/ldap/servers/plugins/replication/repl5_replica.c b/ldap/servers/plugins/replication/repl5_replica.c
index 59062b46b..a97c807e9 100644
--- a/ldap/servers/plugins/replication/repl5_replica.c
+++ b/ldap/servers/plugins/replication/repl5_replica.c
@@ -465,10 +465,10 @@ replica_subentry_create(const char *repl_root, ReplicaId rid)
if (return_value != LDAP_SUCCESS &&
return_value != LDAP_ALREADY_EXISTS &&
return_value != LDAP_REFERRAL /* CONSUMER */) {
- slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name, "replica_subentry_create - Unable to "
- "create replication keep alive entry %s: error %d - %s\n",
- slapi_entry_get_dn_const(e),
- return_value, ldap_err2string(return_value));
+ slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name, "replica_subentry_create - "
+ "Unable to create replication keep alive entry 'cn=%s %d,%s': error %d - %s\n",
+ KEEP_ALIVE_ENTRY, rid, repl_root,
+ return_value, ldap_err2string(return_value));
rc = -1;
goto done;
}
--
2.49.0
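
The rule behind this fix: slapi_add_internal_pb() consumes the entry it is given, so anything the caller still needs from it, such as the DN for an error message, must be captured before the call or rebuilt from other data; the repl5_replica.c hunk rebuilds the DN from KEEP_ALIVE_ENTRY, rid, and repl_root rather than reading the freed entry. A minimal standalone sketch of the same rule, with stand-in names:

    /* Sketch of the "consuming callee" rule fixed in this patch: once a
     * function takes ownership of an object, the caller must not read it
     * again. All names here are illustrative stand-ins. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char *dn;
    } entry_t;

    /* Stand-in for slapi_add_internal_pb(): takes ownership and frees e. */
    static int
    add_internal(entry_t *e)
    {
        free(e->dn);
        free(e);
        return -1; /* pretend the add failed */
    }

    int
    main(void)
    {
        entry_t *e = malloc(sizeof(*e));
        e->dn = strdup("cn=repl keep alive 1,dc=example,dc=com");

        /* Capture what we need for error reporting BEFORE ownership moves. */
        char dn_copy[128];
        snprintf(dn_copy, sizeof(dn_copy), "%s", e->dn);

        if (add_internal(e) != 0) {
            /* Safe: reads the copy, not the freed entry (the old code read
             * e->dn here, which was the use-after-free). */
            fprintf(stderr, "Unable to create keep alive entry %s\n", dn_copy);
        }
        return 0;
    }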


@@ -1,814 +0,0 @@
From e430e1849d40387714fd4c91613eb4bb11f211bb Mon Sep 17 00:00:00 2001
From: Simon Pichugin <spichugi@redhat.com>
Date: Mon, 28 Jul 2025 15:41:29 -0700
Subject: [PATCH] Issue 6884 - Mask password hashes in audit logs (#6885)
Description: Fix the audit log functionality to mask password hash values for
userPassword, nsslapd-rootpw, nsmultiplexorcredentials, nsds5ReplicaCredentials,
and nsds5ReplicaBootstrapCredentials attributes in ADD and MODIFY operations.
Update auditlog.c to detect password attributes and replace their values with
asterisks (**********************) in both LDIF and JSON audit log formats.
Add a comprehensive test suite audit_password_masking_test.py to verify
password masking works correctly across all log formats and operation types.
Fixes: https://github.com/389ds/389-ds-base/issues/6884
Reviewed by: @mreynolds389, @vashirov (Thanks!!)
---
.../logging/audit_password_masking_test.py | 501 ++++++++++++++++++
ldap/servers/slapd/auditlog.c | 170 +++++-
ldap/servers/slapd/slapi-private.h | 1 +
src/lib389/lib389/chaining.py | 3 +-
4 files changed, 652 insertions(+), 23 deletions(-)
create mode 100644 dirsrvtests/tests/suites/logging/audit_password_masking_test.py
diff --git a/dirsrvtests/tests/suites/logging/audit_password_masking_test.py b/dirsrvtests/tests/suites/logging/audit_password_masking_test.py
new file mode 100644
index 000000000..3b6a54849
--- /dev/null
+++ b/dirsrvtests/tests/suites/logging/audit_password_masking_test.py
@@ -0,0 +1,501 @@
+# --- BEGIN COPYRIGHT BLOCK ---
+# Copyright (C) 2025 Red Hat, Inc.
+# All rights reserved.
+#
+# License: GPL (version 3 or any later version).
+# See LICENSE for details.
+# --- END COPYRIGHT BLOCK ---
+#
+import logging
+import pytest
+import os
+import re
+import time
+import ldap
+from lib389._constants import DEFAULT_SUFFIX, DN_DM, PW_DM
+from lib389.topologies import topology_m2 as topo
+from lib389.idm.user import UserAccounts
+from lib389.dirsrv_log import DirsrvAuditJSONLog
+from lib389.plugins import ChainingBackendPlugin
+from lib389.chaining import ChainingLinks
+from lib389.agreement import Agreements
+from lib389.replica import ReplicationManager, Replicas
+from lib389.idm.directorymanager import DirectoryManager
+
+log = logging.getLogger(__name__)
+
+MASKED_PASSWORD = "**********************"
+TEST_PASSWORD = "MySecret123"
+TEST_PASSWORD_2 = "NewPassword789"
+TEST_PASSWORD_3 = "NewPassword101"
+
+
+def setup_audit_logging(inst, log_format='default', display_attrs=None):
+ """Configure audit logging settings"""
+ inst.config.replace('nsslapd-auditlog-logbuffering', 'off')
+ inst.config.replace('nsslapd-auditlog-logging-enabled', 'on')
+ inst.config.replace('nsslapd-auditlog-log-format', log_format)
+
+ if display_attrs is not None:
+ inst.config.replace('nsslapd-auditlog-display-attrs', display_attrs)
+
+ inst.deleteAuditLogs()
+
+
+def check_password_masked(inst, log_format, expected_password, actual_password):
+ """Helper function to check password masking in audit logs"""
+
+ time.sleep(1) # Allow log to flush
+
+ # List of all password/credential attributes that should be masked
+ password_attributes = [
+ 'userPassword',
+ 'nsslapd-rootpw',
+ 'nsmultiplexorcredentials',
+ 'nsDS5ReplicaCredentials',
+ 'nsDS5ReplicaBootstrapCredentials'
+ ]
+
+ # Get password schemes to check for hash leakage
+ user_password_scheme = inst.config.get_attr_val_utf8('passwordStorageScheme')
+ root_password_scheme = inst.config.get_attr_val_utf8('nsslapd-rootpwstoragescheme')
+
+ if log_format == 'json':
+ # Check JSON format logs
+ audit_log = DirsrvAuditJSONLog(inst)
+ log_lines = audit_log.readlines()
+
+ found_masked = False
+ found_actual = False
+ found_hashed = False
+
+ for line in log_lines:
+ # Check if any password attribute is present in the line
+ for attr in password_attributes:
+ if attr in line:
+ if expected_password in line:
+ found_masked = True
+ if actual_password in line:
+ found_actual = True
+ # Check for password scheme indicators (hashed passwords)
+ if user_password_scheme and f'{{{user_password_scheme}}}' in line:
+ found_hashed = True
+ if root_password_scheme and f'{{{root_password_scheme}}}' in line:
+ found_hashed = True
+ break # Found a password attribute, no need to check others for this line
+
+ else:
+ # Check LDIF format logs
+ found_masked = False
+ found_actual = False
+ found_hashed = False
+
+ # Check each password attribute for masked password
+ for attr in password_attributes:
+ if inst.ds_audit_log.match(f"{attr}: {re.escape(expected_password)}"):
+ found_masked = True
+ if inst.ds_audit_log.match(f"{attr}: {actual_password}"):
+ found_actual = True
+
+ # Check for hashed passwords in LDIF format
+ if user_password_scheme:
+ if inst.ds_audit_log.match(f"userPassword: {{{user_password_scheme}}}"):
+ found_hashed = True
+ if root_password_scheme:
+ if inst.ds_audit_log.match(f"nsslapd-rootpw: {{{root_password_scheme}}}"):
+ found_hashed = True
+
+ # Delete audit logs to avoid interference with other tests
+ # We need to reset the root password to default as deleteAuditLogs()
+ # opens a new connection with the default password
+ dm = DirectoryManager(inst)
+ dm.change_password(PW_DM)
+ inst.deleteAuditLogs()
+
+ return found_masked, found_actual, found_hashed
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+ ("default", None),
+ ("default", "*"),
+ ("default", "userPassword"),
+ ("json", None),
+ ("json", "*"),
+ ("json", "userPassword")
+])
+def test_password_masking_add_operation(topo, log_format, display_attrs):
+ """Test password masking in ADD operations
+
+ :id: 4358bd75-bcc7-401c-b492-d3209b10412d
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Configure audit logging format
+ 2. Add user with password
+ 3. Check that password is masked in audit log
+ 4. Verify actual password does not appear in log
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Password should be masked with asterisks
+ 4. Actual password should not be found in log
+ """
+ inst = topo.ms['supplier1']
+ setup_audit_logging(inst, log_format, display_attrs)
+
+ users = UserAccounts(inst, DEFAULT_SUFFIX)
+ user = None
+
+ try:
+ user = users.create(properties={
+ 'uid': 'test_add_pwd_mask',
+ 'cn': 'Test Add User',
+ 'sn': 'User',
+ 'uidNumber': '1000',
+ 'gidNumber': '1000',
+ 'homeDirectory': '/home/test_add',
+ 'userPassword': TEST_PASSWORD
+ })
+
+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+
+ assert found_masked, f"Masked password not found in {log_format} ADD operation"
+ assert not found_actual, f"Actual password found in {log_format} ADD log (should be masked)"
+ assert not found_hashed, f"Hashed password found in {log_format} ADD log (should be masked)"
+
+ finally:
+ if user is not None:
+ try:
+ user.delete()
+ except:
+ pass
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+ ("default", None),
+ ("default", "*"),
+ ("default", "userPassword"),
+ ("json", None),
+ ("json", "*"),
+ ("json", "userPassword")
+])
+def test_password_masking_modify_operation(topo, log_format, display_attrs):
+ """Test password masking in MODIFY operations
+
+ :id: e6963aa9-7609-419c-aae2-1d517aa434bd
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Configure audit logging format
+ 2. Add user without password
+ 3. Add password via MODIFY operation
+ 4. Check that password is masked in audit log
+ 5. Modify password to new value
+ 6. Check that new password is also masked
+ 7. Verify actual passwords do not appear in log
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. Password should be masked with asterisks
+ 5. Success
+ 6. New password should be masked with asterisks
+ 7. No actual password values should be found in log
+ """
+ inst = topo.ms['supplier1']
+ setup_audit_logging(inst, log_format, display_attrs)
+
+ users = UserAccounts(inst, DEFAULT_SUFFIX)
+ user = None
+
+ try:
+ user = users.create(properties={
+ 'uid': 'test_modify_pwd_mask',
+ 'cn': 'Test Modify User',
+ 'sn': 'User',
+ 'uidNumber': '2000',
+ 'gidNumber': '2000',
+ 'homeDirectory': '/home/test_modify'
+ })
+
+ user.replace('userPassword', TEST_PASSWORD)
+
+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+ assert found_masked, f"Masked password not found in {log_format} MODIFY operation (first password)"
+ assert not found_actual, f"Actual password found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed, f"Hashed password found in {log_format} MODIFY log (should be masked)"
+
+ user.replace('userPassword', TEST_PASSWORD_2)
+
+ found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+ assert found_masked_2, f"Masked password not found in {log_format} MODIFY operation (second password)"
+ assert not found_actual_2, f"Second actual password found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed_2, f"Second hashed password found in {log_format} MODIFY log (should be masked)"
+
+ finally:
+ if user is not None:
+ try:
+ user.delete()
+ except:
+ pass
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+ ("default", None),
+ ("default", "*"),
+ ("default", "nsslapd-rootpw"),
+ ("json", None),
+ ("json", "*"),
+ ("json", "nsslapd-rootpw")
+])
+def test_password_masking_rootpw_modify_operation(topo, log_format, display_attrs):
+ """Test password masking for nsslapd-rootpw MODIFY operations
+
+ :id: ec8c9fd4-56ba-4663-ab65-58efb3b445e4
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Configure audit logging format
+ 2. Modify nsslapd-rootpw in configuration
+ 3. Check that root password is masked in audit log
+ 4. Modify root password to new value
+ 5. Check that new root password is also masked
+ 6. Verify actual root passwords do not appear in log
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Root password should be masked with asterisks
+ 4. Success
+ 5. New root password should be masked with asterisks
+ 6. No actual root password values should be found in log
+ """
+ inst = topo.ms['supplier1']
+ setup_audit_logging(inst, log_format, display_attrs)
+ dm = DirectoryManager(inst)
+
+ try:
+ dm.change_password(TEST_PASSWORD)
+ dm.rebind(TEST_PASSWORD)
+
+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+ assert found_masked, f"Masked root password not found in {log_format} MODIFY operation (first root password)"
+ assert not found_actual, f"Actual root password found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed, f"Hashed root password found in {log_format} MODIFY log (should be masked)"
+
+ dm.change_password(TEST_PASSWORD_2)
+ dm.rebind(TEST_PASSWORD_2)
+
+ found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+ assert found_masked_2, f"Masked root password not found in {log_format} MODIFY operation (second root password)"
+ assert not found_actual_2, f"Second actual root password found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed_2, f"Second hashed root password found in {log_format} MODIFY log (should be masked)"
+
+ finally:
+ dm.change_password(PW_DM)
+ dm.rebind(PW_DM)
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+ ("default", None),
+ ("default", "*"),
+ ("default", "nsmultiplexorcredentials"),
+ ("json", None),
+ ("json", "*"),
+ ("json", "nsmultiplexorcredentials")
+])
+def test_password_masking_multiplexor_credentials(topo, log_format, display_attrs):
+ """Test password masking for nsmultiplexorcredentials in chaining/multiplexor configurations
+
+ :id: 161a9498-b248-4926-90be-a696a36ed36e
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Configure audit logging format
+ 2. Create a chaining backend configuration entry with nsmultiplexorcredentials
+ 3. Check that multiplexor credentials are masked in audit log
+ 4. Modify the credentials
+ 5. Check that updated credentials are also masked
+ 6. Verify actual credentials do not appear in log
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Multiplexor credentials should be masked with asterisks
+ 4. Success
+ 5. Updated credentials should be masked with asterisks
+ 6. No actual credential values should be found in log
+ """
+ inst = topo.ms['supplier1']
+ setup_audit_logging(inst, log_format, display_attrs)
+
+ # Enable chaining plugin and create chaining link
+ chain_plugin = ChainingBackendPlugin(inst)
+ chain_plugin.enable()
+
+ chains = ChainingLinks(inst)
+ chain = None
+
+ try:
+ # Create chaining link with multiplexor credentials
+ chain = chains.create(properties={
+ 'cn': 'testchain',
+ 'nsfarmserverurl': 'ldap://localhost:389/',
+ 'nsslapd-suffix': 'dc=example,dc=com',
+ 'nsmultiplexorbinddn': 'cn=manager',
+ 'nsmultiplexorcredentials': TEST_PASSWORD,
+ 'nsCheckLocalACI': 'on',
+ 'nsConnectionLife': '30',
+ })
+
+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+ assert found_masked, f"Masked multiplexor credentials not found in {log_format} ADD operation"
+ assert not found_actual, f"Actual multiplexor credentials found in {log_format} ADD log (should be masked)"
+ assert not found_hashed, f"Hashed multiplexor credentials found in {log_format} ADD log (should be masked)"
+
+ # Modify the credentials
+ chain.replace('nsmultiplexorcredentials', TEST_PASSWORD_2)
+
+ found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+ assert found_masked_2, f"Masked multiplexor credentials not found in {log_format} MODIFY operation"
+ assert not found_actual_2, f"Actual multiplexor credentials found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed_2, f"Hashed multiplexor credentials found in {log_format} MODIFY log (should be masked)"
+
+ finally:
+ chain_plugin.disable()
+ if chain is not None:
+ inst.delete_branch_s(chain.dn, ldap.SCOPE_ONELEVEL)
+ chain.delete()
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+ ("default", None),
+ ("default", "*"),
+ ("default", "nsDS5ReplicaCredentials"),
+ ("json", None),
+ ("json", "*"),
+ ("json", "nsDS5ReplicaCredentials")
+])
+def test_password_masking_replica_credentials(topo, log_format, display_attrs):
+ """Test password masking for nsDS5ReplicaCredentials in replication agreements
+
+ :id: 7bf9e612-1b7c-49af-9fc0-de4c7df84b2a
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Configure audit logging format
+ 2. Create a replication agreement entry with nsDS5ReplicaCredentials
+ 3. Check that replica credentials are masked in audit log
+ 4. Modify the credentials
+ 5. Check that updated credentials are also masked
+ 6. Verify actual credentials do not appear in log
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Replica credentials should be masked with asterisks
+ 4. Success
+ 5. Updated credentials should be masked with asterisks
+ 6. No actual credential values should be found in log
+ """
+ inst = topo.ms['supplier2']
+ setup_audit_logging(inst, log_format, display_attrs)
+ agmt = None
+
+ try:
+ replicas = Replicas(inst)
+ replica = replicas.get(DEFAULT_SUFFIX)
+ agmts = replica.get_agreements()
+ agmt = agmts.create(properties={
+ 'cn': 'testagmt',
+ 'nsDS5ReplicaHost': 'localhost',
+ 'nsDS5ReplicaPort': '389',
+ 'nsDS5ReplicaBindDN': 'cn=replication manager,cn=config',
+ 'nsDS5ReplicaCredentials': TEST_PASSWORD,
+ 'nsDS5ReplicaRoot': DEFAULT_SUFFIX
+ })
+
+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+ assert found_masked, f"Masked replica credentials not found in {log_format} ADD operation"
+ assert not found_actual, f"Actual replica credentials found in {log_format} ADD log (should be masked)"
+ assert not found_hashed, f"Hashed replica credentials found in {log_format} ADD log (should be masked)"
+
+ # Modify the credentials
+ agmt.replace('nsDS5ReplicaCredentials', TEST_PASSWORD_2)
+
+ found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+ assert found_masked_2, f"Masked replica credentials not found in {log_format} MODIFY operation"
+ assert not found_actual_2, f"Actual replica credentials found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed_2, f"Hashed replica credentials found in {log_format} MODIFY log (should be masked)"
+
+ finally:
+ if agmt is not None:
+ agmt.delete()
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+ ("default", None),
+ ("default", "*"),
+ ("default", "nsDS5ReplicaBootstrapCredentials"),
+ ("json", None),
+ ("json", "*"),
+ ("json", "nsDS5ReplicaBootstrapCredentials")
+])
+def test_password_masking_bootstrap_credentials(topo, log_format, display_attrs):
+ """Test password masking for nsDS5ReplicaCredentials and nsDS5ReplicaBootstrapCredentials in replication agreements
+
+ :id: 248bd418-ffa4-4733-963d-2314c60b7c5b
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Configure audit logging format
+ 2. Create a replication agreement entry with both nsDS5ReplicaCredentials and nsDS5ReplicaBootstrapCredentials
+ 3. Check that both credentials are masked in audit log
+ 4. Modify both credentials
+ 5. Check that both updated credentials are also masked
+ 6. Verify actual credentials do not appear in log
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Both credentials should be masked with asterisks
+ 4. Success
+ 5. Both updated credentials should be masked with asterisks
+ 6. No actual credential values should be found in log
+ """
+ inst = topo.ms['supplier2']
+ setup_audit_logging(inst, log_format, display_attrs)
+ agmt = None
+
+ try:
+ replicas = Replicas(inst)
+ replica = replicas.get(DEFAULT_SUFFIX)
+ agmts = replica.get_agreements()
+ agmt = agmts.create(properties={
+ 'cn': 'testbootstrapagmt',
+ 'nsDS5ReplicaHost': 'localhost',
+ 'nsDS5ReplicaPort': '389',
+ 'nsDS5ReplicaBindDN': 'cn=replication manager,cn=config',
+ 'nsDS5ReplicaCredentials': TEST_PASSWORD,
+ 'nsDS5replicabootstrapbinddn': 'cn=bootstrap manager,cn=config',
+ 'nsDS5ReplicaBootstrapCredentials': TEST_PASSWORD_2,
+ 'nsDS5ReplicaRoot': DEFAULT_SUFFIX
+ })
+
+ found_masked_bootstrap, found_actual_bootstrap, found_hashed_bootstrap = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+ assert found_masked_bootstrap, f"Masked bootstrap credentials not found in {log_format} ADD operation"
+ assert not found_actual_bootstrap, f"Actual bootstrap credentials found in {log_format} ADD log (should be masked)"
+ assert not found_hashed_bootstrap, f"Hashed bootstrap credentials found in {log_format} ADD log (should be masked)"
+
+ agmt.replace('nsDS5ReplicaBootstrapCredentials', TEST_PASSWORD_3)
+
+ found_masked_bootstrap_2, found_actual_bootstrap_2, found_hashed_bootstrap_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_3)
+ assert found_masked_bootstrap_2, f"Masked bootstrap credentials not found in {log_format} MODIFY operation"
+ assert not found_actual_bootstrap_2, f"Actual bootstrap credentials found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed_bootstrap_2, f"Hashed bootstrap credentials found in {log_format} MODIFY log (should be masked)"
+
+ finally:
+ if agmt is not None:
+ agmt.delete()
+
+
+
+if __name__ == '__main__':
+ CURRENT_FILE = os.path.realpath(__file__)
+ pytest.main(["-s", CURRENT_FILE])
\ No newline at end of file
diff --git a/ldap/servers/slapd/auditlog.c b/ldap/servers/slapd/auditlog.c
index 1121aef35..7b591e072 100644
--- a/ldap/servers/slapd/auditlog.c
+++ b/ldap/servers/slapd/auditlog.c
@@ -39,6 +39,89 @@ static void write_audit_file(Slapi_PBlock *pb, Slapi_Entry *entry, int logtype,
static const char *modrdn_changes[4];
+/* Helper function to check if an attribute is a password that needs masking */
+static int
+is_password_attribute(const char *attr_name)
+{
+ return (strcasecmp(attr_name, SLAPI_USERPWD_ATTR) == 0 ||
+ strcasecmp(attr_name, CONFIG_ROOTPW_ATTRIBUTE) == 0 ||
+ strcasecmp(attr_name, SLAPI_MB_CREDENTIALS) == 0 ||
+ strcasecmp(attr_name, SLAPI_REP_CREDENTIALS) == 0 ||
+ strcasecmp(attr_name, SLAPI_REP_BOOTSTRAP_CREDENTIALS) == 0);
+}
+
+/* Helper function to create a masked string representation of an entry */
+static char *
+create_masked_entry_string(Slapi_Entry *original_entry, int *len)
+{
+ Slapi_Attr *attr = NULL;
+ char *entry_str = NULL;
+ char *current_pos = NULL;
+ char *line_start = NULL;
+ char *next_line = NULL;
+ char *colon_pos = NULL;
+ int has_password_attrs = 0;
+
+ if (original_entry == NULL) {
+ return NULL;
+ }
+
+ /* Single pass through attributes to check for password attributes */
+ for (slapi_entry_first_attr(original_entry, &attr); attr != NULL;
+ slapi_entry_next_attr(original_entry, attr, &attr)) {
+
+ char *attr_name = NULL;
+ slapi_attr_get_type(attr, &attr_name);
+
+ if (is_password_attribute(attr_name)) {
+ has_password_attrs = 1;
+ break;
+ }
+ }
+
+ /* If no password attributes, return original string - no masking needed */
+ entry_str = slapi_entry2str(original_entry, len);
+ if (!has_password_attrs) {
+ return entry_str;
+ }
+
+ /* Process the string in-place, replacing password values */
+ current_pos = entry_str;
+ while ((line_start = current_pos) != NULL && *line_start != '\0') {
+ /* Find the end of current line */
+ next_line = strchr(line_start, '\n');
+ if (next_line != NULL) {
+ *next_line = '\0'; /* Temporarily terminate line */
+ current_pos = next_line + 1;
+ } else {
+ current_pos = NULL; /* Last line */
+ }
+
+ /* Find the colon that separates attribute name from value */
+ colon_pos = strchr(line_start, ':');
+ if (colon_pos != NULL) {
+ char saved_colon = *colon_pos;
+ *colon_pos = '\0'; /* Temporarily null-terminate attribute name */
+
+ /* Check if this is a password attribute that needs masking */
+ if (is_password_attribute(line_start)) {
+ strcpy(colon_pos + 1, " **********************");
+ }
+
+ *colon_pos = saved_colon; /* Restore colon */
+ }
+
+ /* Restore newline if it was there */
+ if (next_line != NULL) {
+ *next_line = '\n';
+ }
+ }
+
+ /* Update length since we may have shortened the string */
+ *len = strlen(entry_str);
+ return entry_str; /* Return the modified original string */
+}
+
void
write_audit_log_entry(Slapi_PBlock *pb)
{
@@ -282,10 +365,31 @@ add_entry_attrs_ext(Slapi_Entry *entry, lenstr *l, PRBool use_json, json_object
{
slapi_entry_attr_find(entry, req_attr, &entry_attr);
if (entry_attr) {
- if (use_json) {
- log_entry_attr_json(entry_attr, req_attr, id_list);
+ if (strcmp(req_attr, PSEUDO_ATTR_UNHASHEDUSERPASSWORD) == 0) {
+ /* Do not write the unhashed clear-text password */
+ continue;
+ }
+
+ /* Check if this is a password attribute that needs masking */
+ if (is_password_attribute(req_attr)) {
+ /* userpassword/rootdn password - mask the value */
+ if (use_json) {
+ json_object *secret_obj = json_object_new_object();
+ json_object_object_add(secret_obj, req_attr,
+ json_object_new_string("**********************"));
+ json_object_array_add(id_list, secret_obj);
+ } else {
+ addlenstr(l, "#");
+ addlenstr(l, req_attr);
+ addlenstr(l, ": **********************\n");
+ }
} else {
- log_entry_attr(entry_attr, req_attr, l);
+ /* Regular attribute - log normally */
+ if (use_json) {
+ log_entry_attr_json(entry_attr, req_attr, id_list);
+ } else {
+ log_entry_attr(entry_attr, req_attr, l);
+ }
}
}
}
@@ -300,9 +404,7 @@ add_entry_attrs_ext(Slapi_Entry *entry, lenstr *l, PRBool use_json, json_object
continue;
}
- if (strcasecmp(attr, SLAPI_USERPWD_ATTR) == 0 ||
- strcasecmp(attr, CONFIG_ROOTPW_ATTRIBUTE) == 0)
- {
+ if (is_password_attribute(attr)) {
/* userpassword/rootdn password - mask the value */
if (use_json) {
json_object *secret_obj = json_object_new_object();
@@ -312,7 +414,7 @@ add_entry_attrs_ext(Slapi_Entry *entry, lenstr *l, PRBool use_json, json_object
} else {
addlenstr(l, "#");
addlenstr(l, attr);
- addlenstr(l, ": ****************************\n");
+ addlenstr(l, ": **********************\n");
}
continue;
}
@@ -481,6 +583,9 @@ write_audit_file_json(Slapi_PBlock *pb, Slapi_Entry *entry, int logtype,
}
}
+ /* Check if this is a password attribute that needs masking */
+ int is_password_attr = is_password_attribute(mods[j]->mod_type);
+
mod = json_object_new_object();
switch (operationtype) {
case LDAP_MOD_ADD:
@@ -505,7 +610,12 @@ write_audit_file_json(Slapi_PBlock *pb, Slapi_Entry *entry, int logtype,
json_object *val_list = NULL;
val_list = json_object_new_array();
for (size_t i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
- json_object_array_add(val_list, json_object_new_string(mods[j]->mod_bvalues[i]->bv_val));
+ if (is_password_attr) {
+ /* Mask password values */
+ json_object_array_add(val_list, json_object_new_string("**********************"));
+ } else {
+ json_object_array_add(val_list, json_object_new_string(mods[j]->mod_bvalues[i]->bv_val));
+ }
}
json_object_object_add(mod, "values", val_list);
}
@@ -517,8 +627,11 @@ write_audit_file_json(Slapi_PBlock *pb, Slapi_Entry *entry, int logtype,
}
case SLAPI_OPERATION_ADD: {
int len;
+
e = change;
- tmp = slapi_entry2str(e, &len);
+
+ /* Create a masked string representation for password attributes */
+ tmp = create_masked_entry_string(e, &len);
tmpsave = tmp;
while ((tmp = strchr(tmp, '\n')) != NULL) {
tmp++;
@@ -665,6 +778,10 @@ write_audit_file(
break;
}
}
+
+ /* Check if this is a password attribute that needs masking */
+ int is_password_attr = is_password_attribute(mods[j]->mod_type);
+
switch (operationtype) {
case LDAP_MOD_ADD:
addlenstr(l, "add: ");
@@ -689,18 +806,27 @@ write_audit_file(
break;
}
if (operationtype != LDAP_MOD_IGNORE) {
- for (i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
- char *buf, *bufp;
- len = strlen(mods[j]->mod_type);
- len = LDIF_SIZE_NEEDED(len, mods[j]->mod_bvalues[i]->bv_len) + 1;
- buf = slapi_ch_malloc(len);
- bufp = buf;
- slapi_ldif_put_type_and_value_with_options(&bufp, mods[j]->mod_type,
- mods[j]->mod_bvalues[i]->bv_val,
- mods[j]->mod_bvalues[i]->bv_len, 0);
- *bufp = '\0';
- addlenstr(l, buf);
- slapi_ch_free((void **)&buf);
+ if (is_password_attr) {
+ /* Add masked password */
+ for (i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
+ addlenstr(l, mods[j]->mod_type);
+ addlenstr(l, ": **********************\n");
+ }
+ } else {
+ /* Add actual values for non-password attributes */
+ for (i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
+ char *buf, *bufp;
+ len = strlen(mods[j]->mod_type);
+ len = LDIF_SIZE_NEEDED(len, mods[j]->mod_bvalues[i]->bv_len) + 1;
+ buf = slapi_ch_malloc(len);
+ bufp = buf;
+ slapi_ldif_put_type_and_value_with_options(&bufp, mods[j]->mod_type,
+ mods[j]->mod_bvalues[i]->bv_val,
+ mods[j]->mod_bvalues[i]->bv_len, 0);
+ *bufp = '\0';
+ addlenstr(l, buf);
+ slapi_ch_free((void **)&buf);
+ }
}
}
addlenstr(l, "-\n");
@@ -711,7 +837,7 @@ write_audit_file(
e = change;
addlenstr(l, attr_changetype);
addlenstr(l, ": add\n");
- tmp = slapi_entry2str(e, &len);
+ tmp = create_masked_entry_string(e, &len);
tmpsave = tmp;
while ((tmp = strchr(tmp, '\n')) != NULL) {
tmp++;
diff --git a/ldap/servers/slapd/slapi-private.h b/ldap/servers/slapd/slapi-private.h
index e9abf8b75..02f22fd2d 100644
--- a/ldap/servers/slapd/slapi-private.h
+++ b/ldap/servers/slapd/slapi-private.h
@@ -848,6 +848,7 @@ void task_cleanup(void);
/* for reversible encryption */
#define SLAPI_MB_CREDENTIALS "nsmultiplexorcredentials"
#define SLAPI_REP_CREDENTIALS "nsds5ReplicaCredentials"
+#define SLAPI_REP_BOOTSTRAP_CREDENTIALS "nsds5ReplicaBootstrapCredentials"
int pw_rever_encode(Slapi_Value **vals, char *attr_name);
int pw_rever_decode(char *cipher, char **plain, const char *attr_name);
diff --git a/src/lib389/lib389/chaining.py b/src/lib389/lib389/chaining.py
index 533b83ebf..33ae78c8b 100644
--- a/src/lib389/lib389/chaining.py
+++ b/src/lib389/lib389/chaining.py
@@ -134,7 +134,7 @@ class ChainingLink(DSLdapObject):
"""
# Create chaining entry
- super(ChainingLink, self).create(rdn, properties, basedn)
+ link = super(ChainingLink, self).create(rdn, properties, basedn)
# Create mapping tree entry
dn_comps = ldap.explode_dn(properties['nsslapd-suffix'][0])
@@ -149,6 +149,7 @@ class ChainingLink(DSLdapObject):
self._mts.ensure_state(properties=mt_properties)
except ldap.ALREADY_EXISTS:
pass
+ return link
class ChainingLinks(DSLdapObjects):
--
2.49.0
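Note: the audit.c hunks above call is_password_attribute() but its body is not part of this excerpt. Judging from the strcasecmp() checks it replaces further down (SLAPI_USERPWD_ATTR and CONFIG_ROOTPW_ATTRIBUTE) and from the new SLAPI_REP_BOOTSTRAP_CREDENTIALS define, a minimal sketch of the helper could look like the following; the replica-credential checks are an assumption suggested by that define, not something the diff shows:

static int
is_password_attribute(const char *attr_name)
{
    if (attr_name == NULL) {
        return 0;
    }
    /* The two checks the helper visibly replaces in add_entry_attrs_ext() */
    if (strcasecmp(attr_name, SLAPI_USERPWD_ATTR) == 0 ||
        strcasecmp(attr_name, CONFIG_ROOTPW_ATTRIBUTE) == 0) {
        return 1;
    }
    /* Assumed: the reversible replication credentials, including the
     * newly defined bootstrap credentials attribute. */
    return (strcasecmp(attr_name, SLAPI_REP_CREDENTIALS) == 0 ||
            strcasecmp(attr_name, SLAPI_REP_BOOTSTRAP_CREDENTIALS) == 0);
}

Either way, the effect is that audit records show password attributes with their values replaced by asterisks rather than the stored hash.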


@@ -1,262 +0,0 @@
From 572fe6c91fda1c2cfd3afee894c922edccf9c1f1 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Wed, 16 Jul 2025 11:22:30 +0200
Subject: [PATCH] Issue 6778 - Memory leak in
roles_cache_create_object_from_entry part 2
Bug Description:
Every time a role with a scope DN is processed, we leak rolescopeDN.
Fix Description:
* Initialize all pointer variables to NULL
* Add additional NULL checks
* Free rolescopeDN
* Move test_rewriter_with_invalid_filter so it runs before the DB contains 90k entries
* Use task.wait() for import task completion instead of parsing logs,
and increase the timeout
Fixes: https://github.com/389ds/389-ds-base/issues/6778
Reviewed by: @progier389 (Thanks!)
---
dirsrvtests/tests/suites/roles/basic_test.py | 164 +++++++++----------
ldap/servers/plugins/roles/roles_cache.c | 10 +-
2 files changed, 82 insertions(+), 92 deletions(-)
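The leak is a plain ownership bug: slapi_entry_attr_get_charptr() hands back an allocation the caller owns, and the scope-DN string was never freed. Reduced to standard C with hypothetical stand-ins for the Slapi calls, the corrected shape is roughly:

#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for slapi_entry_attr_get_charptr() */
static char *get_scope_dn(void) { return strdup("dc=example,dc=com"); }

static void process_role(void)
{
    char *scope_dn = get_scope_dn(); /* caller owns this string */
    char *sdn = NULL;                /* initialized to NULL, as the fix does */

    if (scope_dn) {
        sdn = strdup(scope_dn);      /* stand-in for building the scope SDN */
        if (sdn) {
            /* ... validate and use the scope ... */
        }
        free(sdn);                   /* free(NULL) is a harmless no-op */
        free(scope_dn);              /* the missing free that caused the leak */
    }
}

int main(void) { process_role(); return 0; }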
diff --git a/dirsrvtests/tests/suites/roles/basic_test.py b/dirsrvtests/tests/suites/roles/basic_test.py
index d92d6f0c3..ec208bae9 100644
--- a/dirsrvtests/tests/suites/roles/basic_test.py
+++ b/dirsrvtests/tests/suites/roles/basic_test.py
@@ -510,6 +510,76 @@ def test_vattr_on_managed_role(topo, request):
request.addfinalizer(fin)
+def test_rewriter_with_invalid_filter(topo, request):
+ """Test that server does not crash when having
+ invalid filter in filtered role
+
+ :id: 5013b0b2-0af6-11f0-8684-482ae39447e5
+ :setup: standalone server
+ :steps:
+ 1. Setup filtered role with good filter
+ 2. Setup nsrole rewriter
+ 3. Restart the server
+ 4. Search for entries
+ 5. Setup filtered role with bad filter
+ 6. Search for entries
+ :expectedresults:
+ 1. Operation should succeed
+ 2. Operation should succeed
+ 3. Operation should succeed
+ 4. Operation should succeed
+ 5. Operation should succeed
+ 6. Operation should succeed
+ """
+ inst = topo.standalone
+ entries = []
+
+ def fin():
+ inst.start()
+ for entry in entries:
+ entry.delete()
+ request.addfinalizer(fin)
+
+ # Setup filtered role
+ roles = FilteredRoles(inst, f'ou=people,{DEFAULT_SUFFIX}')
+ filter_ko = '(&((objectClass=top)(objectClass=nsPerson))'
+ filter_ok = '(&(objectClass=top)(objectClass=nsPerson))'
+ role_properties = {
+ 'cn': 'TestFilteredRole',
+ 'nsRoleFilter': filter_ok,
+ 'description': 'Test good filter',
+ }
+ role = roles.create(properties=role_properties)
+ entries.append(role)
+
+ # Setup nsrole rewriter
+ rewriters = Rewriters(inst)
+ rewriter_properties = {
+ "cn": "nsrole",
+ "nsslapd-libpath": 'libroles-plugin',
+ "nsslapd-filterrewriter": 'role_nsRole_filter_rewriter',
+ }
+ rewriter = rewriters.ensure_state(properties=rewriter_properties)
+ entries.append(rewriter)
+
+ # Restart the instance
+ inst.restart()
+
+ # Search for entries (named results so the fin() cleanup list is not clobbered)
+ results = inst.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(nsrole=%s)" % role.dn)
+
+ # Set bad filter
+ role_properties = {
+ 'cn': 'TestFilteredRole',
+ 'nsRoleFilter': filter_ko,
+ 'description': 'Test bad filter',
+ }
+ role.ensure_state(properties=role_properties)
+
+ # Search for entries again with the bad filter in place
+ results = inst.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(nsrole=%s)" % role.dn)
+
+
def test_managed_and_filtered_role_rewrite(topo, request):
"""Test that filter components containing 'nsrole=xxx'
are reworked if xxx is either a filtered role or a managed
@@ -581,17 +651,11 @@ def test_managed_and_filtered_role_rewrite(topo, request):
PARENT="ou=people,%s" % DEFAULT_SUFFIX
dbgen_users(topo.standalone, 90000, import_ldif, DEFAULT_SUFFIX, entry_name=RDN, generic=True, parent=PARENT)
- # online import
+ # Online import
import_task = ImportTask(topo.standalone)
import_task.import_suffix_from_ldif(ldiffile=import_ldif, suffix=DEFAULT_SUFFIX)
- # Check for up to 200sec that the completion
- for i in range(1, 20):
- if len(topo.standalone.ds_error_log.match('.*import userRoot: Import complete. Processed 9000.*')) > 0:
- break
- time.sleep(10)
- import_complete = topo.standalone.ds_error_log.match('.*import userRoot: Import complete. Processed 9000.*')
- assert (len(import_complete) == 1)
-
+ import_task.wait(timeout=400)
+ assert import_task.get_exit_code() == 0
# Restart server
topo.standalone.restart()
@@ -715,17 +779,11 @@ def test_not_such_entry_role_rewrite(topo, request):
PARENT="ou=people,%s" % DEFAULT_SUFFIX
dbgen_users(topo.standalone, 91000, import_ldif, DEFAULT_SUFFIX, entry_name=RDN, generic=True, parent=PARENT)
- # online import
+ # Online import
import_task = ImportTask(topo.standalone)
import_task.import_suffix_from_ldif(ldiffile=import_ldif, suffix=DEFAULT_SUFFIX)
- # Check for up to 200sec that the completion
- for i in range(1, 20):
- if len(topo.standalone.ds_error_log.match('.*import userRoot: Import complete. Processed 9100.*')) > 0:
- break
- time.sleep(10)
- import_complete = topo.standalone.ds_error_log.match('.*import userRoot: Import complete. Processed 9100.*')
- assert (len(import_complete) == 1)
-
+ import_task.wait(timeout=400)
+ assert import_task.get_exit_code() == 0
# Restart server
topo.standalone.restart()
@@ -769,76 +827,6 @@ def test_not_such_entry_role_rewrite(topo, request):
request.addfinalizer(fin)
-def test_rewriter_with_invalid_filter(topo, request):
- """Test that server does not crash when having
- invalid filter in filtered role
-
- :id: 5013b0b2-0af6-11f0-8684-482ae39447e5
- :setup: standalone server
- :steps:
- 1. Setup filtered role with good filter
- 2. Setup nsrole rewriter
- 3. Restart the server
- 4. Search for entries
- 5. Setup filtered role with bad filter
- 6. Search for entries
- :expectedresults:
- 1. Operation should succeed
- 2. Operation should succeed
- 3. Operation should succeed
- 4. Operation should succeed
- 5. Operation should succeed
- 6. Operation should succeed
- """
- inst = topo.standalone
- entries = []
-
- def fin():
- inst.start()
- for entry in entries:
- entry.delete()
- request.addfinalizer(fin)
-
- # Setup filtered role
- roles = FilteredRoles(inst, f'ou=people,{DEFAULT_SUFFIX}')
- filter_ko = '(&((objectClass=top)(objectClass=nsPerson))'
- filter_ok = '(&(objectClass=top)(objectClass=nsPerson))'
- role_properties = {
- 'cn': 'TestFilteredRole',
- 'nsRoleFilter': filter_ok,
- 'description': 'Test good filter',
- }
- role = roles.create(properties=role_properties)
- entries.append(role)
-
- # Setup nsrole rewriter
- rewriters = Rewriters(inst)
- rewriter_properties = {
- "cn": "nsrole",
- "nsslapd-libpath": 'libroles-plugin',
- "nsslapd-filterrewriter": 'role_nsRole_filter_rewriter',
- }
- rewriter = rewriters.ensure_state(properties=rewriter_properties)
- entries.append(rewriter)
-
- # Restart thge instance
- inst.restart()
-
- # Search for entries
- entries = inst.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(nsrole=%s)" % role.dn)
-
- # Set bad filter
- role_properties = {
- 'cn': 'TestFilteredRole',
- 'nsRoleFilter': filter_ko,
- 'description': 'Test bad filter',
- }
- role.ensure_state(properties=role_properties)
-
- # Search for entries
- entries = inst.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(nsrole=%s)" % role.dn)
-
-
if __name__ == "__main__":
CURRENT_FILE = os.path.realpath(__file__)
pytest.main("-s -v %s" % CURRENT_FILE)
diff --git a/ldap/servers/plugins/roles/roles_cache.c b/ldap/servers/plugins/roles/roles_cache.c
index 3e1c5b429..05cabc3a3 100644
--- a/ldap/servers/plugins/roles/roles_cache.c
+++ b/ldap/servers/plugins/roles/roles_cache.c
@@ -1117,16 +1117,17 @@ roles_cache_create_object_from_entry(Slapi_Entry *role_entry, role_object **resu
rolescopeDN = slapi_entry_attr_get_charptr(role_entry, ROLE_SCOPE_DN);
if (rolescopeDN) {
- Slapi_DN *rolescopeSDN;
- Slapi_DN *top_rolescopeSDN, *top_this_roleSDN;
+ Slapi_DN *rolescopeSDN = NULL;
+ Slapi_DN *top_rolescopeSDN = NULL;
+ Slapi_DN *top_this_roleSDN = NULL;
/* Before accepting to use this scope, first check if it belongs to the same suffix */
rolescopeSDN = slapi_sdn_new_dn_byref(rolescopeDN);
- if ((strlen((char *)slapi_sdn_get_ndn(rolescopeSDN)) > 0) &&
+ if (rolescopeSDN && (strlen((char *)slapi_sdn_get_ndn(rolescopeSDN)) > 0) &&
(slapi_dn_syntax_check(NULL, (char *)slapi_sdn_get_ndn(rolescopeSDN), 1) == 0)) {
top_rolescopeSDN = roles_cache_get_top_suffix(rolescopeSDN);
top_this_roleSDN = roles_cache_get_top_suffix(this_role->dn);
- if (slapi_sdn_compare(top_rolescopeSDN, top_this_roleSDN) == 0) {
+ if (top_rolescopeSDN && top_this_roleSDN && slapi_sdn_compare(top_rolescopeSDN, top_this_roleSDN) == 0) {
/* rolescopeDN belongs to the same suffix as the role, we can use this scope */
this_role->rolescopedn = rolescopeSDN;
} else {
@@ -1148,6 +1149,7 @@ roles_cache_create_object_from_entry(Slapi_Entry *role_entry, role_object **resu
rolescopeDN);
slapi_sdn_free(&rolescopeSDN);
}
+ slapi_ch_free_string(&rolescopeDN);
}
/* Depending upon role type, pull out the remaining information we need */
--
2.49.0


@@ -1,64 +0,0 @@
From dbaf0ccfb54be40e2854e3979bb4460e26851b5a Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 28 Jul 2025 13:16:10 +0200
Subject: [PATCH] Issue 6901 - Update changelog trimming logging - fix tests
Description:
Update changelog_trimming_test for the new error message.
Fixes: https://github.com/389ds/389-ds-base/issues/6901
Reviewed by: @progier389, @aadhikar (Thanks!)
---
.../suites/replication/changelog_trimming_test.py | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/dirsrvtests/tests/suites/replication/changelog_trimming_test.py b/dirsrvtests/tests/suites/replication/changelog_trimming_test.py
index 2d70d328e..27d19e8fd 100644
--- a/dirsrvtests/tests/suites/replication/changelog_trimming_test.py
+++ b/dirsrvtests/tests/suites/replication/changelog_trimming_test.py
@@ -110,7 +110,7 @@ def test_max_age(topo, setup_max_age):
do_mods(supplier, 10)
time.sleep(1) # Trimming should not have occurred
- if supplier.searchErrorsLog("Trimmed") is True:
+ if supplier.searchErrorsLog("trimmed") is True:
log.fatal('Trimming event unexpectedly occurred')
assert False
@@ -120,12 +120,12 @@ def test_max_age(topo, setup_max_age):
cl.set_trim_interval('5')
time.sleep(3) # Trimming should not have occurred
- if supplier.searchErrorsLog("Trimmed") is True:
+ if supplier.searchErrorsLog("trimmed") is True:
log.fatal('Trimming event unexpectedly occurred')
assert False
time.sleep(3) # Trimming should have occurred
- if supplier.searchErrorsLog("Trimmed") is False:
+ if supplier.searchErrorsLog("trimmed") is False:
log.fatal('Trimming event did not occur')
assert False
@@ -159,7 +159,7 @@ def test_max_entries(topo, setup_max_entries):
do_mods(supplier, 10)
time.sleep(1) # Trimming should have occurred
- if supplier.searchErrorsLog("Trimmed") is True:
+ if supplier.searchErrorsLog("trimmed") is True:
log.fatal('Trimming event unexpectedly occurred')
assert False
@@ -169,7 +169,7 @@ def test_max_entries(topo, setup_max_entries):
cl.set_trim_interval('5')
time.sleep(6) # Trimming should have occurred
- if supplier.searchErrorsLog("Trimmed") is False:
+ if supplier.searchErrorsLog("trimmed") is False:
log.fatal('Trimming event did not occur')
assert False
--
2.49.0
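The test only broke because log scraping is case-sensitive and the server's message changed from "Trimmed" to "trimmed". A case-insensitive check in C would tolerate such changes (sketch; the log line is made up and strcasestr() is a GNU extension):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Hypothetical log line, for illustration only */
    const char *logline = "INFO - changelog trimming - trimmed 42 entries";
    if (strcasestr(logline, "trimmed") != NULL) { /* matches Trimmed/trimmed */
        puts("trimming event found");
    }
    return 0;
}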


@@ -1,32 +0,0 @@
From b34cec9c719c6dcb5f3ff24b9fd9e20eb233eadf Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 28 Jul 2025 13:18:26 +0200
Subject: [PATCH] Issue 6181 - RFE - Allow system to manage uid/gid at startup
Description:
Expand CapabilityBoundingSet to include CAP_FOWNER
Relates: https://github.com/389ds/389-ds-base/issues/6181
Relates: https://github.com/389ds/389-ds-base/issues/6906
Reviewed by: @progier389 (Thanks!)
---
wrappers/systemd.template.service.in | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/wrappers/systemd.template.service.in b/wrappers/systemd.template.service.in
index ada608c86..8d2b96c7e 100644
--- a/wrappers/systemd.template.service.in
+++ b/wrappers/systemd.template.service.in
@@ -29,7 +29,7 @@ MemoryAccounting=yes
# Allow non-root instances to bind to low ports.
AmbientCapabilities=CAP_NET_BIND_SERVICE
-CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETUID CAP_SETGID CAP_DAC_OVERRIDE CAP_CHOWN
+CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETUID CAP_SETGID CAP_DAC_OVERRIDE CAP_CHOWN CAP_FOWNER
PrivateTmp=on
# https://en.opensuse.org/openSUSE:Security_Features#Systemd_hardening_effort
--
2.49.0
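CAP_FOWNER lets a process pass the "must own the file" permission checks (for example chmod- or utime-style metadata updates on files owned by another uid), which managing the instance uid/gid at startup requires. A standalone sketch, not part of the patch, to confirm the capability survives into the service's bounding set:

#include <stdio.h>
#include <sys/prctl.h>
#include <linux/capability.h>

int main(void)
{
    /* PR_CAPBSET_READ returns 1 if the capability is in this thread's
     * bounding set, 0 if it was dropped, -1 on error. */
    int r = prctl(PR_CAPBSET_READ, CAP_FOWNER, 0, 0, 0);
    if (r < 0) {
        perror("prctl(PR_CAPBSET_READ)");
        return 1;
    }
    printf("CAP_FOWNER %s in the bounding set\n", r ? "is" : "is not");
    return 0;
}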


@@ -1,31 +0,0 @@
From 403077fd337a6221e95f704b4fcd70fe09d1d7e3 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Tue, 29 Jul 2025 08:00:00 +0200
Subject: [PATCH] Issue 6468 - CLI - Fix default error log level
Description:
Default error log level is 16384
Relates: https://github.com/389ds/389-ds-base/issues/6468
Reviewed by: @droideck (Thanks!)
---
src/lib389/lib389/cli_conf/logging.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/lib389/lib389/cli_conf/logging.py b/src/lib389/lib389/cli_conf/logging.py
index 124556f1f..d9ae1ab16 100644
--- a/src/lib389/lib389/cli_conf/logging.py
+++ b/src/lib389/lib389/cli_conf/logging.py
@@ -44,7 +44,7 @@ ERROR_LEVELS = {
+ "methods used for a SASL bind"
},
"default": {
- "level": 6384,
+ "level": 16384,
"desc": "Default logging level"
},
"filter": {
--
2.49.0


@@ -1,97 +0,0 @@
From ec7c5a58c7decf94ba5011656c68597778f6059c Mon Sep 17 00:00:00 2001
From: James Chapman <jachapma@redhat.com>
Date: Fri, 1 Aug 2025 13:27:02 +0100
Subject: [PATCH] Issue 6768 - ns-slapd crashes when a referral is added
(#6780)
Bug description: When a paged result search is successfully run on a referred
suffix, we retrieve the search result set from the pblock and try to release
it. In this case the search result set is NULL, which triggers a SEGV during
the release.
Fix description: If the search result code is LDAP_REFERRAL, skip deletion of
the search result set. Added test case.
Fixes: https://github.com/389ds/389-ds-base/issues/6768
Reviewed by: @tbordaz, @progier389 (Thank you)
---
.../paged_results/paged_results_test.py | 46 +++++++++++++++++++
ldap/servers/slapd/opshared.c | 4 +-
2 files changed, 49 insertions(+), 1 deletion(-)
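The crash reduces to calling through an optional backend hook that is not set on the path a referral takes. A minimal sketch of the guarded shape (types simplified; in the real code the hook is be_search_results_release on the backend structure):

#include <stdio.h>

typedef struct backend {
    void (*release)(void **results); /* optional hook, may be NULL */
} backend;

static void safe_release(backend *be, void **results)
{
    if (be->release != NULL) { /* the guard the fix adds */
        be->release(results);
    }
}

int main(void)
{
    backend referral_be = { .release = NULL }; /* hook not provided */
    void *sr = NULL;
    safe_release(&referral_be, &sr); /* previously a call through NULL */
    printf("ok\n");
    return 0;
}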
diff --git a/dirsrvtests/tests/suites/paged_results/paged_results_test.py b/dirsrvtests/tests/suites/paged_results/paged_results_test.py
index fca48db0f..1bb94b53a 100644
--- a/dirsrvtests/tests/suites/paged_results/paged_results_test.py
+++ b/dirsrvtests/tests/suites/paged_results/paged_results_test.py
@@ -1271,6 +1271,52 @@ def test_search_stress_abandon(create_40k_users, create_user):
paged_search(conn, create_40k_users.suffix, [req_ctrl], search_flt, searchreq_attrlist, abandon_rate=abandon_rate)
+def test_search_referral(topology_st):
+ """Test a paged search on a referred suffix doesnt crash the server.
+
+ :id: c788bdbf-965b-4f12-ac24-d4d695e2cce2
+
+ :setup: Standalone instance
+
+ :steps:
+ 1. Configure a default referral.
+ 2. Create a paged result search control.
+ 3. Paged result search on the referral suffix (it doesn't exist on the instance, triggering a referral).
+ 4. Check the server is still running.
+ 5. Remove referral.
+
+ :expectedresults:
+ 1. Referral successfully set.
+ 2. Control created.
+ 3. Search returns ldap.REFERRAL (10).
+ 4. Server still running.
+ 5. Referral removed.
+ """
+
+ page_size = 5
+ SEARCH_SUFFIX = "dc=referme,dc=com"
+ REFERRAL = "ldap://localhost.localdomain:389/o%3dnetscaperoot"
+
+ log.info('Configuring referral')
+ topology_st.standalone.config.set('nsslapd-referral', REFERRAL)
+ referral = topology_st.standalone.config.get_attr_val_utf8('nsslapd-referral')
+ assert (referral == REFERRAL)
+
+ log.info('Create paged result search control')
+ req_ctrl = SimplePagedResultsControl(True, size=page_size, cookie='')
+
+ log.info('Perform a paged result search on referred suffix, no chase')
+ with pytest.raises(ldap.REFERRAL):
+ topology_st.standalone.search_ext_s(SEARCH_SUFFIX, ldap.SCOPE_SUBTREE, serverctrls=[req_ctrl])
+
+ log.info('Confirm instance is still running')
+ assert (topology_st.standalone.status())
+
+ log.info('Remove referral')
+ topology_st.standalone.config.remove_all('nsslapd-referral')
+ referral = topology_st.standalone.config.get_attr_val_utf8('nsslapd-referral')
+ assert referral is None
+
if __name__ == '__main__':
# Run isolated
# -s for DEBUG mode
diff --git a/ldap/servers/slapd/opshared.c b/ldap/servers/slapd/opshared.c
index 545518748..a5cddfd23 100644
--- a/ldap/servers/slapd/opshared.c
+++ b/ldap/servers/slapd/opshared.c
@@ -910,7 +910,9 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
/* Free the results if not "no_such_object" */
void *sr = NULL;
slapi_pblock_get(pb, SLAPI_SEARCH_RESULT_SET, &sr);
- be->be_search_results_release(&sr);
+ if (be->be_search_results_release != NULL) {
+ be->be_search_results_release(&sr);
+ }
}
pagedresults_set_search_result(pb_conn, operation, NULL, 1, pr_idx);
rc = pagedresults_set_current_be(pb_conn, NULL, pr_idx, 1);
--
2.49.0


@@ -11,7 +11,7 @@ ExcludeArch: i686
%endif
%bcond bundle_libdb 0
-%if 0%{?rhel} == 10
+%if 0%{?rhel} >= 10
%bcond bundle_libdb 1
%endif
@@ -75,7 +75,7 @@ ExcludeArch: i686
Summary: 389 Directory Server (%{variant})
Name: 389-ds-base
-Version: 3.1.3
+Version: 3.1.4
Release: %{autorelease -n %{?with_asan:-e asan}}%{?dist}
License: GPL-3.0-or-later WITH GPL-3.0-389-ds-base-exception AND (0BSD OR Apache-2.0 OR MIT) AND (Apache-2.0 OR Apache-2.0 WITH LLVM-exception OR MIT) AND (Apache-2.0 OR BSL-1.0) AND (Apache-2.0 OR LGPL-2.1-or-later OR MIT) AND (Apache-2.0 OR MIT OR Zlib) AND (Apache-2.0 OR MIT) AND (CC-BY-4.0 AND MIT) AND (MIT OR Apache-2.0) AND Unicode-3.0 AND (MIT OR CC0-1.0) AND (MIT OR Unlicense) AND 0BSD AND Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND ISC AND MIT AND MIT AND ISC AND MPL-2.0 AND PSF-2.0 AND Zlib
URL: https://www.port389.org
@@ -220,7 +220,7 @@ Provides: bundled(npm(argparse)) = 2.0.1
Provides: bundled(npm(attr-accept)) = 2.2.4
Provides: bundled(npm(autolinker)) = 3.16.2
Provides: bundled(npm(balanced-match)) = 1.0.2
-Provides: bundled(npm(brace-expansion)) = 1.1.11
+Provides: bundled(npm(brace-expansion)) = 1.1.12
Provides: bundled(npm(callsites)) = 3.1.0
Provides: bundled(npm(chalk)) = 4.1.2
Provides: bundled(npm(color-convert)) = 2.0.1
@@ -289,7 +289,7 @@ Provides: bundled(npm(isexe)) = 2.0.0
Provides: bundled(npm(js-sha1)) = 0.7.0
Provides: bundled(npm(js-sha256)) = 0.11.0
Provides: bundled(npm(js-tokens)) = 4.0.0
-Provides: bundled(npm(js-yaml)) = 4.1.0
+Provides: bundled(npm(js-yaml)) = 4.1.1
Provides: bundled(npm(json-buffer)) = 3.0.1
Provides: bundled(npm(json-schema-traverse)) = 0.4.1
Provides: bundled(npm(json-stable-stringify-without-jsonify)) = 1.0.1
@@ -500,7 +500,7 @@ Requires: python3-file-magic
# Picks up our systemd deps.
%{?systemd_requires}
-Source0: %{name}-%{version}.tar.bz2
+Source0: https://github.com/389ds/%{name}/releases/download/%{name}-%{version}/%{name}-%{version}.tar.bz2
Source2: %{name}-devel.README
%if %{with bundle_jemalloc}
Source3: https://github.com/jemalloc/%{jemalloc_name}/releases/download/%{jemalloc_ver}/%{jemalloc_name}-%{jemalloc_ver}.tar.bz2
@@ -510,35 +510,15 @@ Source4: 389-ds-base.sysusers
Source5: https://fedorapeople.org/groups/389ds/libdb-5.3.28-59.tar.bz2
%endif
-Patch: 0001-Issue-6782-Improve-paged-result-locking.patch
-Patch: 0002-Issue-6822-Backend-creation-cleanup-and-Database-UI-.patch
-Patch: 0003-Issue-6753-Add-add_exclude_subtree-and-remove_exclud.patch
-Patch: 0004-Issue-6857-uiduniq-allow-specifying-match-rules-in-t.patch
-Patch: 0005-Issue-6756-CLI-UI-Properly-handle-disabled-NDN-cache.patch
-Patch: 0006-Issue-6854-Refactor-for-improved-data-management-685.patch
-Patch: 0007-Issue-6850-AddressSanitizer-memory-leak-in-mdb_init.patch
-Patch: 0008-Issue-6848-AddressSanitizer-leak-in-do_search.patch
-Patch: 0009-Issue-6865-AddressSanitizer-leak-in-agmt_update_init.patch
-Patch: 0010-Issue-6868-UI-schema-attribute-table-expansion-break.patch
-Patch: 0011-Issue-6859-str2filter-is-not-fully-applying-matching.patch
-Patch: 0012-Issue-6872-compressed-log-rotation-creates-files-wit.patch
-Patch: 0013-Issue-6888-Missing-access-JSON-logging-for-TLS-Clien.patch
-Patch: 0014-Issue-6772-dsconf-Replicas-with-the-consumer-role-al.patch
-Patch: 0015-Issue-6893-Log-user-that-is-updated-during-password-.patch
-Patch: 0016-Issue-6901-Update-changelog-trimming-logging.patch
-Patch: 0017-Issue-6430-implement-read-only-bdb-6431.patch
-Patch: 0018-Issue-6663-Fix-NULL-subsystem-crash-in-JSON-error-lo.patch
-Patch: 0019-Issue-6895-Crash-if-repl-keep-alive-entry-can-not-be.patch
-Patch: 0020-Issue-6884-Mask-password-hashes-in-audit-logs-6885.patch
-Patch: 0021-Issue-6778-Memory-leak-in-roles_cache_create_object_.patch
-Patch: 0022-Issue-6901-Update-changelog-trimming-logging-fix-tes.patch
-Patch: 0023-Issue-6181-RFE-Allow-system-to-manage-uid-gid-at-sta.patch
-Patch: 0024-Issue-6468-CLI-Fix-default-error-log-level.patch
-Patch: 0025-Issue-6768-ns-slapd-crashes-when-a-referral-is-added.patch
-# For ELN
-Patch: 0001-Issue-5120-Fix-compilation-error.patch
-Patch: 0001-Issue-6929-Compilation-failure-with-rust-1.89-on-Fed.patch
+Patch: 0001-Issue-7150-Compressed-access-log-rotations-skipped-a.patch
+Patch: 0002-Sync-lib389-version-to-3.1.4-7161.patch
+Patch: 0003-Issue-7166-db_config_set-asserts-because-of-dynamic-.patch
+Patch: 0004-Issue-7160-Add-lib389-version-sync-check-to-configur.patch
+Patch: 0005-Issue-7096-During-replication-online-total-init-the-.patch
+Patch: 0006-Issue-Revise-paged-result-search-locking.patch
+Patch: 0007-Issue-7108-Fix-shutdown-crash-in-entry-cache-destruc.patch
+Patch: 0008-Issue-7172-Index-ordering-mismatch-after-upgrade-717.patch
+Patch: 0009-Issue-7172-2nd-Index-ordering-mismatch-after-upgrade.patch
%description
389 Directory Server is an LDAPv3 compliant server. The base package includes
@@ -552,7 +532,7 @@ Please see http://seclists.org/oss-sec/2016/q1/363 for more information.
%if %{with libbdb_ro}
%package robdb-libs
Summary: Read-only Berkeley Database Library
-License: GPL-3.0-or-later WITH GPL-3.0-389-ds-base-exception AND (0BSD OR Apache-2.0 OR MIT) AND (Apache-2.0 OR Apache-2.0 WITH LLVM-exception OR MIT) AND (Apache-2.0 OR BSL-1.0) AND (Apache-2.0 OR LGPL-2.1-or-later OR MIT) AND (Apache-2.0 OR MIT OR Zlib) AND (Apache-2.0 OR MIT) AND (CC-BY-4.0 AND MIT) AND (MIT OR Apache-2.0) AND Unicode-3.0 AND (MIT OR CC0-1.0) AND (MIT OR Unlicense) AND 0BSD AND Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND ISC AND MIT AND MIT AND ISC AND MPL-2.0 AND PSF-2.0 AND Zlib
+License: GPL-2.0-or-later OR LGPL-2.1-or-later
%description robdb-libs
The %{name}-robdb-lib package contains a library derived from rpm


@@ -1,3 +1,3 @@
SHA512 (jemalloc-5.3.0.tar.bz2) = 22907bb052096e2caffb6e4e23548aecc5cc9283dce476896a2b1127eee64170e3562fa2e7db9571298814a7a2c7df6e8d1fbe152bd3f3b0c1abec22a2de34b1
-SHA512 (389-ds-base-3.1.3.tar.bz2) = bd15c29dba5209ed828a2534e51fd000fdd5d32862fd07ea73339e73489b3c79f1991c91592c75dbb67384c696a03c82378f156bbea594e2e17421c95ca4c6be
SHA512 (libdb-5.3.28-59.tar.bz2) = 731a434fa2e6487ebb05c458b0437456eb9f7991284beb08cb3e21931e23bdeddddbc95bfabe3a2f9f029fe69cd33a2d4f0f5ce6a9811e9c3b940cb6fde4bf79
+SHA512 (389-ds-base-3.1.4.tar.bz2) = 17de77a02c848dbb8d364e7bab529726b4c32e466f47d5c2a5bba8d8b55e2a56e2b743a2efa4f820c935b39f770a621146a42443e4f171f8b14c68968155ee2c