Fluent Bit keeps logging "failed to flush chunk" for the es output. A representative excerpt from the debug log (duplicate dashboard timestamps removed):

[2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69479190 file has been deleted: /var/log/containers/hello-world-dsfcz_argo_main-13bb1b2c7e9d3e70003814aa3900bb9aef645cf5e3270e3ee4db0988240b9eff.log
[2022/03/25 07:08:29] [debug] [input chunk] update output instances with new chunk size diff=1085
[2022/03/25 07:08:31] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
[2022/03/25 07:08:37] [ warn] [engine] failed to flush chunk '1-1648192108.829100670.flb', retry in 16 seconds: task_id=7, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:47] [debug] [out coro] cb_destroy coro_id=17
[2022/03/25 07:08:49] [debug] [retry] re-using retry for task_id=16 attempts=2
[2022/03/25 07:08:50] [debug] [task] created task=0x7ff2f183b560 id=20 OK
[2022/03/25 07:08:51] [debug] [input chunk] update output instances with new chunk size diff=655

Every retried bulk request comes back with per-item 400 errors like this one:

{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"IOMmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}

On the retry side, the documentation for Retry_Limit says: N must be >= 1 (default: 1); when Retry_Limit is set to no_limits or False, there is no limit on the number of retries the scheduler can perform.
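To make the retry behaviour explicit, Retry_Limit can be set in the [OUTPUT] section. A minimal sketch of an es output matching the logs above (host, port, and index are taken from the log excerpt; the rest are illustrative defaults, not a verified config):

```
[OUTPUT]
    Name            es
    Match           *
    Host            10.3.4.84
    Port            9200
    Logstash_Format On
    # Retry_Limit: N must be >= 1 (default 1).
    # Setting it to no_limits or False removes the cap on retries,
    # which with a permanent 400 mapping error means retrying forever.
    Retry_Limit     5
```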
Note that the HTTP status can be zero when the status line is unparseable (the client parses it with atoi()), yet flb_http_do() still returns success, so a "successful" call does not guarantee a usable response.

More of the debug output:

[2022/03/24 04:19:52] [debug] [outputes.0] task_id=0 assigned to thread #0
[2022/03/24 04:20:06] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/workflow-controller-bb7c78c7b-w2n5c_argo_workflow-controller-7f4797ff53352e50ff21cf9625ec02ffb226172a2a3ed9b0cee0cb1d071a2990.log, inode 34598688
[2022/03/24 04:20:51] [debug] [out coro] cb_destroy coro_id=6
[2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=1931990 watch_fd=19
[2022/03/25 07:08:38] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
[2022/03/25 07:08:38] [debug] [http_client] not using http_proxy for header
[2022/03/25 07:08:39] [ warn] [engine] failed to flush chunk '1-1648192119.62045721.flb', retry in 11 seconds: task_id=13, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:41] [debug] [task] created task=0x7ff2f183ac00 id=15 OK

At this point I have 5 Fluentd pods, and 2 of them were OOMKilled and restarted several times. Trace logging is enabled, but there is no log entry that helps me any further.
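The atoi() behaviour mentioned above is easy to reproduce. A minimal Python re-implementation of C's atoi() semantics, for illustration only (this is not Fluent Bit code):

```python
def atoi(s: str) -> int:
    """Mimic C atoi(): skip leading whitespace, optional sign,
    then consume digits; return 0 if no digits are found."""
    s = s.lstrip()
    sign, i = 1, 0
    if s[:1] in ("+", "-"):
        sign = -1 if s[0] == "-" else 1
        i = 1
    n = 0
    while i < len(s) and s[i].isdigit():
        n = n * 10 + int(s[i])
        i += 1
    return sign * n

print(atoi("200 OK"))    # a normal status parses fine
print(atoi("garbage"))   # an unparseable status line silently becomes 0
```

This is why an HTTP status of 0 in the es output is a sign of a malformed response rather than a real status code.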
Hi @yangtian9999, I'm seeing the same symptom: logs stop being flushed after some amount of time.

[2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104051102 events: IN_ATTRIB
[2022/03/24 04:20:25] [debug] [upstream] KA connection #102 to 10.3.4.84:9200 has been assigned (recycled)
[2022/03/25 07:08:29] [debug] [http_client] not using http_proxy for header
[2022/03/25 07:08:46] [debug] [http_client] not using http_proxy for header
[2022/03/25 07:08:48] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
[2022/03/25 07:08:32] [debug] [outputes.0] HTTP Status=200 URI=/_bulk

The bulk request itself succeeds at the HTTP level, but the response body reports per-item failures:

{"took":1923,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"HeMoun8BI6SaBP9lIP7t","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...]}
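This is the key point when debugging: a _bulk call can return HTTP 200 while every item in it fails. A small sketch of the check one can run against the response body (the sample payload is abridged from the logs above; field names follow the Elasticsearch bulk response format):

```python
import json

# Abridged _bulk response: the HTTP status was 200, but "errors" is true
# and each item carries its own per-document status.
body = """{"took":1923,"errors":true,"items":[
  {"create":{"_index":"logstash-2022.03.24","status":400,
   "error":{"type":"mapper_parsing_exception",
            "reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]."}}}]}"""

resp = json.loads(body)
failed = []
if resp.get("errors"):
    # Collect items whose per-document status indicates a failure.
    failed = [i["create"] for i in resp["items"]
              if i.get("create", {}).get("status", 0) >= 400]
    for item in failed:
        print(item["status"], item["error"]["type"])
```

So when checking whether data reached Elasticsearch, inspect the "errors" flag and per-item statuses, not just the transport-level status code.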
When the response cannot be parsed at all, the output plugin logs an error instead:

[2022/03/24 04:19:49] [error] [outputes.0] could not pack/validate JSON response

and the engine keeps creating and re-scheduling retries:

[2022/03/25 07:08:22] [ warn] [engine] failed to flush chunk '1-1648192101.677940929.flb', retry in 9 seconds: task_id=4, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:28] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available
[2022/03/25 07:08:29] [debug] [retry] new retry created for task_id=7 attempts=1
[2022/03/25 07:08:31] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
[2022/03/25 07:08:38] [debug] [input chunk] update output instances with new chunk size diff=681
[2022/03/25 07:08:42] [debug] [outputes.0] HTTP Status=200 URI=/_bulk

How can I debug why the data is not reaching Elasticsearch?
The tail input itself is working; it scans and picks up the container logs:

[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scanning path /var/log/containers/*.log
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-wpr5j_argo_main-55a61ed18250cc1e46ac98d918072e16dab1c6a73f7f9cf0a5dd096959cf6964.log, inode 35326802
[2022/03/25 07:08:29] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
[2022/03/25 07:08:39] [debug] [retry] re-using retry for task_id=6 attempts=2
[2022/03/25 07:08:41] [debug] [input chunk] update output instances with new chunk size diff=650
[2022/03/25 07:08:48] [debug] [input chunk] update output instances with new chunk size diff=681

For comparison, in a setup with a forward output the same retry mechanism eventually succeeds:

[2022/05/26 06:04:45] [ info] [engine] flush chunk '1-1653545056.585580723.flb' succeeded at retry 1: task_id=90, input=tail.0 > output=forward.0 (out_id=0)
[2022/05/26 06:04:46] [ info] [engine] flush chunk '1-1653545061.402631314.flb' succeeded at retry 1: task_id=102, input=tail.0 > output=forward.0 (out_id=0)
Because the chunks can never be flushed, the buffer keeps failing, the file buffer directory fills up, the retry back-off grows, and eventually the input is paused:

[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records
[2022/03/24 04:20:34] [debug] [http_client] not using http_proxy for header
[2022/03/24 04:20:51] [ warn] [engine] failed to flush chunk '1-1648095560.205735907.flb', retry in 111 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:31] [ warn] [engine] failed to flush chunk '1-1648192110.850147571.flb', retry in 9 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:38] [ warn] [engine] failed to flush chunk '1-1648192109.839317289.flb', retry in 16 seconds: task_id=8, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:50] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
[2022/03/25 07:08:51] [debug] [outputes.0] HTTP Status=200 URI=/_bulk

Every such bulk request is answered with the same mapper_parsing_exception for [app.kubernetes.io/instance], so the retries can never succeed.
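The underlying conflict is that kubernetes.labels.app is already mapped as text, so a label key like app.kubernetes.io/instance (which dynamic mapping would interpret as an object under "app") can never be indexed into the same index. One common workaround, assuming the es output plugin is being used, is its Replace_Dots option, which rewrites dots in field names to underscores so the conflicting key becomes a flat field. A sketch (host and port taken from the logs; untested against this exact cluster):

```
[OUTPUT]
    Name            es
    Match           *
    Host            10.3.4.84
    Port            9200
    Logstash_Format On
    # Rewrite dots in keys (app.kubernetes.io/instance ->
    # app_kubernetes_io/instance) so dynamic mapping does not try
    # to turn the existing [kubernetes.labels.app] text field
    # into an object.
    Replace_Dots    On
```

Alternatively, the index mapping can be changed (for example via an index template) so that label keys do not collide, but that requires reindexing from the next Logstash-style daily index onward.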