June 21 - 22, 2025, Sydney, Australia
Omar Hujran and Nuseiba Altarawneh, Statistics and Business Analytics, United Arab Emirates University, Al-Ain, UAE
Until recently, scholarly focus on the post-adoption phase, specifically users' continued usage of chatbot services, has been notably limited. Recognizing this gap in the existing literature, the current research endeavors to develop an integrative model. This model extends the theoretical framework of the Expectation Confirmation Model (ECM) by incorporating key constructs related to chatbot service quality and anthropomorphism, both of which are deemed crucial in influencing the continued usage of chatbot services. As an integral facet of a broader research undertaking, the model will be empirically applied to assess citizens' intentions regarding the continued use of chatbot services within the context of the United Arab Emirates.
Chatbots, post-adoption, ECM, anthropomorphism, chatbot service quality.
Photis Nanopoulos, National Technical University of Athens, Greece; Retired, EUROSTAT, Greece
In this paper we propose the definition of a divergence coefficient between two random variables (Y, X), which provides a general theoretical framework for several classical coefficients used in supervised machine learning algorithms. We use the well-known properties of conditional expectation between two random variables to define a coefficient φ(Y/X) of Y versus X, from which we derive as particular cases the coefficients underlying the classical ID3, C4.5, and CART algorithms, alongside others, also proving the optimality of the solutions they may provide. We also note the well-known fact that the Bregman divergence generalizes the squared L2 norm.
Classification methods, optimal partitions, decision trees, Gini coefficient, entropy, Bregman divergences.
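The abstract does not reproduce the paper's own definition of φ(Y/X); as background only, here are standard definitions of the classical quantities it names (Gini impurity, Shannon entropy, and the Bregman divergence), in conventional notation rather than the paper's:

```latex
\[
  G(p) = 1 - \sum_i p_i^2, \qquad H(p) = -\sum_i p_i \log_2 p_i
\]
\[
  D_\phi(x,y) = \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y \rangle,
  \qquad
  \phi(x) = \lVert x \rVert^2 \;\Rightarrow\; D_\phi(x,y) = \lVert x - y \rVert^2
\]
```

The last identity is the sense in which the Bregman divergence generalizes the squared L2 norm: taking the squared norm as the generating convex function recovers squared Euclidean distance.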
Praveen Pilla1 and Gudeangadi Vinaya2, 1Physical Design Engineer, Intel Technology India Pvt. Ltd, Bangalore, 2SoC Design Manager, Intel Technology India Pvt. Ltd, Bangalore
At the SoC level, channels are used to interconnect different IPs. When the number of wires routed through a channel grows large, the channel area becomes a substantial portion of the total SoC area. Channels are an inefficient use of area because they carry only signal routing with very little other utilization. To optimize area, IPs are therefore abutted at the SoC level and channels are replaced by SoC feedthrough wires that run across the IPs. In an abutted design methodology, two IPs connected to each other may not be adjacent due to floorplan constraints, which is why SoC feedthrough wires are routed across the intermediate IPs. Modern design planning tools can handle creating SoC feedthroughs at the IP level, but in the case of Multiple Instantiated Blocks (MIBs, i.e., multiple instances of the same reference), the tools cannot handle them efficiently and effectively. This paper discusses different scenarios where the tool cannot handle SoC feedthrough creation and proposes solutions to resolve them.
MIB: Multiple Instantiated Blocks, SoC: System on Chip, IP: Intellectual Property.
Vinoodhini D1, Ajai Ram2 and Arockia Xavier Annie R3, 1,2,3Anna University, Chennai - 600025, India
Audio verification is a key biometric authentication method used to confirm an individual's identity based on their voice. By tackling issues like fluctuating acoustic conditions and a wide range of voice characteristics, this research aims to enhance speaker verification systems. Existing approaches are ineffective in practical situations, and as security requirements for various applications increase, more reliable solutions are needed. To improve voice feature extraction and analysis, the study makes use of the DF-ResNet architecture, which combines a transformation module with a depth-first search strategy. The model is assessed on speaker verification datasets drawn from real acoustic settings. Experimental results demonstrate its effectiveness in improving accuracy while maintaining low computational complexity, making it a viable solution for modern biometric authentication systems.
Biometric Authentication, Depth First Resnet, Transformation Module, Speaker verification.
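The abstract does not include the DF-ResNet implementation; as a minimal sketch of the verification step such an encoder feeds, the snippet below scores a trial by cosine similarity between speaker embeddings and thresholds the result. The embedding dimensionality and threshold are placeholders, not the paper's values.

```python
import torch
import torch.nn.functional as F

def verify(embed_enroll: torch.Tensor, embed_trial: torch.Tensor,
           threshold: float = 0.6) -> bool:
    """Accept the trial if the cosine similarity between the enrollment
    and trial embeddings exceeds the threshold. The embeddings are
    assumed to come from a speaker encoder such as DF-ResNet."""
    score = F.cosine_similarity(embed_enroll, embed_trial, dim=-1)
    return bool(score.item() > threshold)

# Toy usage with random vectors standing in for real embeddings.
e1 = torch.randn(256)
e2 = e1 + 0.1 * torch.randn(256)   # same speaker: a nearby embedding
print(verify(e1, e2), verify(e1, torch.randn(256)))
```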
Ascanio Bernardeschi, Centro Studi Domenico Losurdo, Università Popolare Antonio Gramsci, Roma, Italy
This paper explores the socio-economic implications of Artificial Intelligence, challenging the notion of technological neutrality. It critically engages with concepts such as "cognitive capitalism" and the alleged obsolescence of the Marxian labor theory of value. AI is shown to intensify labor exploitation by de-skilling, automating intellectual labor, and extending surveillance across global supply chains. While AI has the potential to relieve humans of certain cognitive burdens and foster progress in sectors like medicine or science, its current use under capitalism reproduces class-based inequalities. The paper stresses the need to politically contest the capitalist appropriation of AI and reorient it toward collective emancipation. It concludes by calling for shorter working hours, transparent and open-source AI development, and the formation of unified political platforms that empower a fragmented working class. The rise of AI should not lead to techno-pessimism or neo-Luddism, but to a renewed class struggle adapted to the digital age.
Economics, Finance, Marxian Theory, Open source, Generative AI.
Sulakna Weerasinghe, Ovindu Gunatunga, Warthula Dewpura, Sanduni Fernando, Dharshana Kasthurirathna and Samadhi Rathnayake, Sri Lanka Institute of Information Technology, Sri Lanka
Multimodal retrieval systems have gained significant attention due to their ability to process and cross-retrieve data containing images and text. However, factors such as the high cost of development, limited resources, and the difficulty of properly addressing the modality gap (the inherent representational differences between modalities) pose a challenge to building effective and efficient retrieval models. In this work, we propose a low-resource, cost-efficient hybrid multimodal retrieval model that integrates CLIP and All-MiniLM-L6-v2 to create a shared embedding space while storing raw images in an unstructured database. Our primary contributions include (1) the development of a hybrid model that outperforms CLIP-native retrieval, (2) a novel bidirectional neural network alignment technique that brings textual and visual modalities closer together, and (3) a comprehensive analysis of the modality gap's impact on downstream retrieval performance. Through careful evaluation using transparent metrics such as Mean Reciprocal Rank (MRR) and Cosine-Weighted MRR, our method demonstrates improved retrieval accuracy over baseline approaches. Experimental results show that a lower modality gap does not always translate into better downstream retrieval. Our findings pave the way for more efficient, adaptable, and cost-effective multimodal retrieval methodologies in low-resource environments, not limited to the education domain.
Multimodal Retrieval, Modality Gap, CLIP, Hybrid model, Low-resource, Low-cost, Neural Network, Education, Domain-Specific Data, Embedding Space, Cross Modal Retrieval.
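Of the two metrics named above, Mean Reciprocal Rank has a standard definition; the Cosine-Weighted variant is not specified in the abstract, so only plain MRR is sketched here:

```python
def mean_reciprocal_rank(ranked_results, relevant):
    """MRR over queries: for each query, take 1/rank of the first
    relevant item in its ranked result list (0 if none is relevant),
    then average across queries.

    ranked_results: list of result-ID lists, one per query
    relevant:       list of sets of relevant IDs, one per query
    """
    total = 0.0
    for results, rel in zip(ranked_results, relevant):
        for rank, item in enumerate(results, start=1):
            if item in rel:
                total += 1.0 / rank
                break
    return total / len(ranked_results)

# Two queries: first hit at rank 2 and rank 1 -> MRR = (0.5 + 1.0) / 2
print(mean_reciprocal_rank([["a", "b"], ["c", "d"]],
                           [{"b"}, {"c"}]))  # 0.75
```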
M.S. Lungu and B. Labudin, Northern (Arctic) Federal University named after M. V. Lomonosov, Naberezhnaya Severnoy Dviny 17, Arkhangelsk, Russia
The Internet of Things (IoT) is a transformative technology fostering global connectivity and innovation across sectors like agriculture, healthcare, transportation, and urban management. In Africa, IoT adoption offers exceptional opportunities to improve service delivery and socio-economic development. However, these advances bring significant security challenges—including inadequate device protection, network vulnerabilities, and a pronounced knowledge gap among stakeholders—that threaten to undermine IoT’s potential benefits. This paper explores the unique IoT security risks within the African context, emphasizing the urgent need for a comprehensive, multi-faceted security approach. By examining successful case studies from Kenya, South Africa, and other countries, it illustrates practical security solutions integrated into real-world IoT implementations. The study advocates for awareness programs, regulatory frameworks, public-private partnerships, and investments in cybersecurity technologies to safeguard IoT ecosystems. Ultimately, it aims to motivate stakeholders to prioritize information security, ensuring the sustainable growth of IoT technologies in Africa.
Information security, Internet of Things (IoT), network vulnerabilities, Internet Corporation for Assigned Names and Numbers (ICANN).
Sai Prasad Veluru1 and Mohan Krishna Manchala2, 1Software Engineer at Apple Inc, USA, 2ML Engineer at Meta, USA
Managing incidents within Kubernetes ecosystems is difficult and labor intensive, often requiring quick discovery, troubleshooting, and resolution. With the eventual aim of near-minimal human intervention, this project explores the design of LLM-based copilots meant to optimize and automate incident handling. Integrating large language models (LLMs) with existing technologies such as Slack and Kubernetes helps teams resolve issues more effectively. The three main problems with Kubernetes incident management are the sheer volume of logs, complex configurations, and the need for quick decision-making under pressure. We propose an approach in which LLMs, trained on previous incident data, diagnose problems and provide real-time remediation through seamless interaction with Kubernetes and Slack, thereby addressing these challenges. Teams can reduce the time and expertise required for incident management by using LLMs to obtain preemptive issue notifications, automatic troubleshooting tools, and help from within their communication platforms. The results of the study show that our solution improved operational efficiency by more than 80% by reducing incident resolution times. This method shows how well AI can manage incidents, opening the path for further developments in Kubernetes ecosystem automation. With additional development, this idea may extend to many other cloud-native systems, offering a more intelligent and scalable method of system administration.
Large Language Models (LLMs), Kubernetes, Incident Management, Copilot Systems, Automation, Slack Integration, DevOps, Cloud-native Environments, Incident Elimination, System Resilience, Real-time Operations, Cloud Management, Self-healing Systems, Anomaly Detection, Incident Resolution, Kubernetes Clusters, Log Analysis, Automation Tools.
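The abstract gives no implementation details; as a minimal sketch of the Kubernetes-plus-Slack loop it describes, the snippet below pulls recent pod logs with the official Python client, hands them to a placeholder diagnose_with_llm function (the actual model call is an assumption), and posts the result with slack_sdk.

```python
from kubernetes import client, config
from slack_sdk import WebClient

def diagnose_with_llm(logs: str) -> str:
    """Placeholder for the copilot's LLM call; a real implementation
    would prompt a model trained on past incident data."""
    return f"Suspected cause based on {len(logs.splitlines())} log lines..."

def report_incident(pod: str, namespace: str, channel: str, slack_token: str):
    config.load_kube_config()              # or load_incluster_config()
    v1 = client.CoreV1Api()
    logs = v1.read_namespaced_pod_log(name=pod, namespace=namespace,
                                      tail_lines=200)
    diagnosis = diagnose_with_llm(logs)
    WebClient(token=slack_token).chat_postMessage(
        channel=channel, text=f"*{namespace}/{pod}*: {diagnosis}")
```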
Pavan Paidy1 and Krishna Chaganti2, 1AppSec Lead at FINRA, USA, 2Associate Director at S&P Global, USA
Strong security is essential in today's fast-paced digital environment as businesses switch to multi-cloud architectures for improved scalability and agility. Ensuring visibility, control, and compliance in dynamic environments such as AWS and Azure now depends primarily on Cloud Security Posture Management (CSPM). These platforms provide freedom but also major risks: misconfigured storage, over-permissive identities, and unattended services can easily be overlooked. The continuous monitoring that CSPM offers helps find vulnerabilities before they become critical by flagging deviations from security baselines. This approach depends on audits, which enable teams to compare the current state of affairs with internal compliance requirements. Consistent audit trails assist quick issue resolution and informed decision-making by providing both accountability and insight into system behavior. Alongside this sits continuous compliance monitoring, which carefully checks systems and instantly spots changes. Beyond detection, good CSPM integrates with DevOps pipelines to rapidly address problems and combines automated, prioritized risk-mitigation technologies, enhancing security protections early in the development process. CSPM products help organize the chaos and align security operations with corporate goals, preserving the speed of innovation even as safeguarding cloud environments grows more complicated. For companies running both AWS and Azure, an audit-driven, policy-enforced CSPM strategy is not merely advisable but essential.
Cloud-Native Security, CSPM, AWS Security, Azure Compliance, Risk Assessment, Continuous Auditing, Multi-Cloud Strategy, Compliance Automation, DevSecOps, Security Monitoring, Cloud Compliance, Configuration Management, Policy Enforcement, Identity and Access Management, Cloud Risk Mitigation, Security Posture, Automated Remediation, Cloud Governance, Vulnerability Detection, Real-Time Monitoring, Security Best Practices, Regulatory Compliance, Cloud Workloads, Hybrid Cloud, Infrastructure as Code, Cloud Visibility, Security Baseline, Audit Trails, Misconfiguration Detection, Security Frameworks, Azure Governance, AWS Controls, Threat Detection, Compliance Reporting, Continuous Compliance, Risk Prioritization, DevOps Integration, Cloud Controls, Data Protection, Compliance Frameworks, Cloud Audit Tools, Security Automation, Azure Security Centre, AWS Security Hub, Cloud Security Tools, SOC 2 Compliance, HIPAA Cloud Security, NIST Compliance, CIS Benchmarks, Security Policy Enforcement.
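As an illustrative (not prescriptive) sketch of the kind of misconfiguration check a CSPM pipeline automates, the snippet below uses boto3 to list S3 buckets and flag any without a full public-access block; credentials and the exact policy threshold are assumptions.

```python
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block():
    """Flag S3 buckets whose public access is not fully blocked --
    one example of a CSPM-style baseline check."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)
            cfg = cfg["PublicAccessBlockConfiguration"]
            if not all(cfg.values()):        # any of the four flags is off
                flagged.append(name)
        except ClientError:
            flagged.append(name)             # no block configured at all
    return flagged

if __name__ == "__main__":
    print("Buckets to review:", buckets_missing_public_access_block())
```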
Yasodhara Varma Rangineeni, Vice President at JPMorgan Chase, USA
Financial security is fundamental to modern economies, and check fraud is a major challenge for both consumers and banking institutions. Dependent on human validation and rule-based algorithms, conventional fraud detection solutions have proven insufficient to counteract the constantly evolving strategies used by fraudsters. This work explores how well deep learning models with image recognition detect check fraud. The proposed approach uses advanced neural networks, more specifically convolutional neural networks (CNNs), to independently detect anomalies in check images, including altered amounts, signatures, and counterfeit marks. With accuracy above conventional methods, DL models can evaluate rich information including handwritten text, font variations, and graphical elements. We provide a strategy that combines sophisticated model architectures with image preprocessing techniques to offer a practical means of real-time fraud detection. Experimental results show the robustness of the model, which obtains high accuracy and recall in spotting fraudulent checks, even those with few alterations. The paper underlines the need to incorporate modern technologies into present financial systems in order to improve their security mechanisms and lower losses. This work emphasizes the groundbreaking ability of deep learning to stop financial fraud and provides a scalable, flexible answer to an increasingly complex problem.
Image recognition, deep learning, check fraud, financial security, fraud detection, convolutional neural networks, deposit fraud, neural networks, fraud prevention, automated check verification, financial institutions, machine learning, pattern recognition.
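The abstract does not specify the network; as a hedged sketch of the general shape such a classifier might take, here is a minimal binary CNN (genuine vs. fraudulent check image). The layer sizes and the grayscale 128x128 input are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CheckFraudCNN(nn.Module):
    """Minimal binary classifier over grayscale 128x128 check images.
    Purely illustrative; the paper's actual architecture is not given."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 64), nn.ReLU(),
            nn.Linear(64, 2),            # genuine vs. fraudulent
        )

    def forward(self, x):
        return self.head(self.features(x))

logits = CheckFraudCNN()(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```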
Sangeeta Anand, Senior Business System Analyst at Continental General, USA, and Sumeet Sharma, Senior Project Manager at Continental General, USA
Changing demographics, growing healthcare expenses, and rising standards for digital service delivery have long-term care insurance (LTCI) firms under increasing pressure to modernise. Standard long-term care insurance platforms fall short in managing claims, assessing risks, ensuring policy compliance, and providing decent customer service. Usually, scattered data stores, manual procedures, and outdated technology lead to these issues. These limitations make it harder to deliver specialised, premium care, increase running expenses, and slow response times. This paper explores how cloud-based data analytics could transform how these problems are addressed and offer a fresh way for long-term care insurers to operate. Cloud technologies enable scalable and flexible configurations that speed up real-time data processing, simplify case management, and strengthen predictive analytics. By moving to cloud-native architectures, LTCI companies can cut operational expenses, improve decisions, and provide better, more customer-centric experiences. Two key modernising strategies are preparing for a cloud migration and emphasising business stability, security, and cost control. The paper underlines the need to establish robust data governance systems to guarantee adherence to HIPAA, other regulations, and data quality standards, as well as the safe handling and preservation of private policyholder data. Interoperability is crucial, since it allows several systems to function together without issues; among these systems are EHRs, nursing networks, analytics databases, and claims-handling engines. LTCI must have advanced data analytics capabilities, including predictive modelling for risk stratification, fraud detection technology, and sentiment analysis of client comments, if it is to modernise effectively. These technologies set appropriate rates, enable insurance firms to identify high-risk individuals ahead of time, expedite the claims process, and create customised treatment recommendations based on historical data.
LTCI, Cloud Computing, Data Analytics, Healthcare Insurance, Predictive Modeling, Claims Management, Risk Stratification, Cloud Migration, Data Governance, Interoperability, Regulatory Compliance, Real-Time Processing, Customer Experience, Fraud Detection, Sentiment Analysis, Machine Learning, Legacy Systems Modernization, Electronic Health Records, Actuarial Analytics, Insurance Technology (InsurTech).
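The abstract mentions predictive modelling for risk stratification without implementation detail; below is a hedged sketch using scikit-learn with synthetic data and invented feature names (age, chronic_conditions, prior_claims), purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; real features and labels would come from
# policyholder records (names here are invented for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))          # age, chronic_conditions, prior_claims
y = (X @ np.array([0.8, 1.2, 1.5]) + rng.normal(size=1000)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Probability of the high-risk class drives stratification tiers.
risk = model.predict_proba(X_te)[:, 1]
tiers = np.digitize(risk, [0.33, 0.66])   # 0=low, 1=medium, 2=high
print("High-risk share:", (tiers == 2).mean())
```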
Parth Jani, Project manager at Molina Healthcare, USA
Biometric authentication is a necessity in modern security systems, since it offers a consistent and simple way of confirming identity. Unimodal biometric systems, which depend on a single biometric characteristic such as a fingerprint, iris, or face, often run into constraints. These include sensitivity to noise during data collection, vulnerability to spoofing attempts, intra-class variation, and poor generalization across diverse populations. Multimodal biometric authentication techniques were developed to address these issues and increase the efficiency of identity verification systems. By drawing on the advantages of several biometric modalities, these systems provide enhanced accuracy, durability, and general resilience. This work specifies the architecture and implementation of a multimodal biometric identification system using fusion approaches at various levels: sensor, feature, score, and decision. Each level of fusion has certain benefits: sensor-level fusion combines raw data from many sources to increase information richness; feature-level fusion combines feature sets derived from several modalities to improve discriminating capability; score-level fusion aggregates matching scores to improve flexibility and dependability; and decision-level fusion aggregates final classification decisions to yield a robust authentication result. The system combines fingerprint and facial recognition to demonstrate the success of the fusion strategy. We assess each fusion level under consideration for false acceptance rate (FAR), false rejection rate (FRR), and total authentication accuracy using controlled trials. Particularly in contexts marked by noisy or incomplete input, the results reveal that multimodal fusion greatly surpasses unimodal systems. The results illustrate the degree of protection multimodal systems provide for access to private systems and data. The work then addresses future prospects such as the integration of new biometric traits, adaptive fusion techniques using machine learning, and privacy-preserving systems that protect user information while maintaining authentication efficacy.
Noise Tolerance in Biometrics, Adaptive Fusion Algorithms, Machine Learning in Biometrics, Biometric System Performance, False Acceptance Rate (FAR), False Rejection Rate (FRR), Usability in Authentication Systems, Biometric Trait Complementarity, Next-Generation Authentication, Robust Identity Management.
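Score-level fusion, one of the four levels the abstract lists, is straightforward to sketch: min-max-normalize each modality's matcher scores, take a weighted sum, and threshold. The weights and threshold below are placeholders that would normally be tuned on a development set to trade off FAR against FRR.

```python
import numpy as np

def minmax(scores: np.ndarray) -> np.ndarray:
    """Scale a matcher's scores into [0, 1] so modalities are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)

def score_level_fusion(face_scores, finger_scores,
                       w_face=0.5, threshold=0.55):
    """Weighted-sum score fusion of two matchers' outputs."""
    fused = w_face * minmax(np.asarray(face_scores, float)) \
            + (1 - w_face) * minmax(np.asarray(finger_scores, float))
    return fused > threshold   # True = accept

# FAR = accepted impostors / impostors; FRR = rejected genuines / genuines.
decisions = score_level_fusion([0.2, 0.9, 0.7], [0.1, 0.8, 0.9])
print(decisions)
```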
Anusha Atluri1 and Teja Puttamsetti2, 1Sr. Oracle Techno-Functional Consultant at Oracle, USA, 2Senior Integration Specialist at Caesars Entertainment, USA
One of the main obstacles in the proper integration of Oracle Cloud ERP and HCM systems is achieving both scalability and resilience. These two systems, each quite powerful in its own right, demand robust integration strategies to ensure that they collaborate faultlessly as organizations grow and data volumes expand. Foremost is the challenge of maintaining performance, data consistency, and uptime while managing very complex business processes across the two systems simultaneously. This article introduces scalable integration patterns that aim not only to address these issues but also to maintain system resilience. Through modular integration designs, data synchronization, and automation, the proposed models facilitate seamless communication between Oracle Cloud ERP and HCM, with the systems adjusting to the business's growing requirements. Resilience comes from a fault-tolerant architecture, meaning the system can manage errors gracefully and, in the worst case, minimize loss of service. The study points out the necessity of cloud-native tools and the use of best practices in building effective and robust integrations; such tools are gaining in importance because unexpected disruptions must not compromise service quality. These scalable integration patterns were designed to be flexible so that organizations can change their operations without difficulty. Moreover, the article highlights the significance of automation in avoiding the errors and inefficiencies associated with manual intervention. This study is motivated by the continuous improvement of cloud tools and the requirement for organizations to create systems that are not only flexible but also resilient. In practice, implementing these integration patterns would allow an organization's cloud ERP and HCM systems to be optimized, ensuring long-run business success. The next frontier for cloud integration points to more automation, smarter ways of dealing with data, and architectures that evolve with developing technologies and business environments.
Oracle Cloud ERP, Oracle Cloud HCM, Scalable Integration Solutions, Cloud Integration, ERP Resilience, Enterprise Resource Planning, Human Capital Management.
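As a minimal illustration of the fault-tolerant pattern the abstract describes (not Oracle's actual integration tooling), the sketch below wraps an integration call in retries with exponential backoff; the endpoint and payload are hypothetical.

```python
import time
import requests

def call_with_backoff(url: str, payload: dict,
                      max_attempts: int = 4, base_delay: float = 1.0):
    """Retry a transient-failure-prone integration call with exponential
    backoff -- one common resilience pattern between cloud systems."""
    for attempt in range(max_attempts):
        try:
            resp = requests.post(url, json=payload, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == max_attempts - 1:
                raise                      # surface the failure upstream
            time.sleep(base_delay * 2 ** attempt)   # 1s, 2s, 4s, ...

# Hypothetical endpoint for syncing a worker record from HCM into ERP.
# call_with_backoff("https://example.invalid/erp/sync", {"worker_id": "123"})
```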
Vasanta Kumar Tarra, Lead Engineer at Guidewire Software, USA
The financial services sector operates in one of the most regulated settings on Earth. Institutions must follow continually changing legal rules while negotiating a complicated network of customer contacts, transactions, and sensitive data. As customer demand for speed, transparency, and customised service increases, companies need thorough case management solutions to handle client complaints, service requests, and compliance-related obligations appropriately and securely. Compliance is a fundamental component of trust and a prerequisite for business continuity in the financial services industry; it is not merely a procedural formality. Ignoring data privacy laws such as GDPR, anti-money laundering (AML) rules, and know-your-customer (KYC) standards can result in large financial penalties and reputational damage. Including compliance at every level of case handling systems is thus not optional but required. Salesforce's Financial Services Cloud (FSC) provides a good foundation for developing compliant case management systems. Designed specifically to meet the requirements of financial institutions, FSC provides pre-configured data models, security systems, and industry-standard-compliant processes. It streamlines case monitoring, document management, client correspondence, and audit trails, allowing companies to satisfy legal criteria to a high standard. This work proposes an FSC-based design and implementation approach for compliant case management. Our method ensures real-time auditability, automates compliance processes, tailors the FSC data model to specific regulatory criteria, and integrates secure communication channels. Through configuration, customisation, and integration with external compliance solutions, we show how businesses can not only meet but significantly exceed their compliance needs. Among the key outcomes are faster resolution times, lower regulatory risk, and increased client satisfaction. Future case management in financial services will likely rely heavily on artificial intelligence, automation, and real-time analytics as consumers expect ever more customised and secure services and regulations grow more rigorous. Financial Services Cloud helps businesses adapt to change and maintain a competitive advantage through its expanding ecosystem and constant innovation.
Financial Services Cloud (FSC), Compliance, Case Management System, Salesforce, Regulatory Compliance, Financial Services, Data Privacy, Automation, CRM, Risk Management.
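As a hedged sketch of programmatic case creation against a Salesforce org (using the community simple-salesforce client rather than FSC-specific tooling; any fields beyond the standard Case object are assumptions and omitted):

```python
from simple_salesforce import Salesforce

def open_compliance_case(sf: Salesforce, subject: str, description: str):
    """Create a Case record flagged for compliance review. Standard
    Case fields only; an FSC org would typically add custom fields
    (e.g. regulation type), which are left out here as unknowns."""
    return sf.Case.create({
        "Subject": subject,
        "Description": description,
        "Origin": "Web",
        "Priority": "High",
    })

# sf = Salesforce(username="user@example.com", password="...",
#                 security_token="...")
# result = open_compliance_case(sf, "KYC document gap",
#                               "Missing proof of address for account 42")
```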
Varun Varma Sangaraju, Senior QA Engineer at Cognizant, USA
In the current digital world, releasing software of the highest quality in the shortest possible time has become critical for maintaining business competitiveness. The catch, however, is that most test automation strategies today are not sufficient: they depend heavily on isolated scripts and incomplete tooling that hamper cooperation and slow down delivery. This article addresses a large gap in the literature by offering a one-stop solution: an enterprise-grade test automation framework that tightly integrates Python's ease of use, Selenium's ubiquity, and Cucumber's behavior-driven development (BDD) capabilities. We applied design science methods in the context of a multi-project case based on the digital transformation program of a leading Fortune 500 company. In addition, the effectiveness of our new modular framework was empirically tested against the previous legacy code across several projects. The results are striking: a 55% reduction in maintenance effort, a 37% acceleration of release cycles, and a better understanding of the user space. Our study suggests a concrete way for companies that want to redesign their test automation process to achieve a large ROI and, in the long run, greater revenue. Using our framework, firms can make the most of test automation, speed up software development, and reach new milestones in quality and innovation. The framework's modular structure lays the foundation for smooth integration into existing development pipelines, enabling teams to work as productively as possible. This research not only opens the door for further research and successful implementation but also provides a standardized method for combining strategy, tooling, and culture, shaping the future of software testing and delivery. Our paper benefits scholars, practitioners, and leaders alike by supplying a validated test automation transformation plan that can be applied across stages and sectors and is flexible enough to meet specific organizational needs. The study's outcomes and insights together form a solid base for a plan that supports essential steps toward an organization's digital transformation and improved overall business performance.
Enterprise Test Automation, Python, Selenium, Cucumber, Behavior-Driven Development, Continuous Integration, Continuous Delivery, DevOps, Quality Assurance, Software Testing, Test Automation Framework, Agile Testing, Digital Transformation, Software Quality, Automation Testing.
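The abstract names the stack but shows no code; as a minimal sketch of how the three layers typically meet in such a framework, here is a behave step file (behave is a Python implementation of Cucumber-style Gherkin; the paper may use a different binding) driving Selenium. The page URL and element locator are placeholders.

```python
# steps/search_steps.py -- behave step definitions driving Selenium.
from behave import given, when, then
from selenium import webdriver
from selenium.webdriver.common.by import By

@given("the user is on the home page")
def step_open_home(context):
    context.driver = webdriver.Chrome()
    context.driver.get("https://example.com")        # placeholder URL

@when('the user searches for "{term}"')
def step_search(context, term):
    box = context.driver.find_element(By.NAME, "q")  # placeholder locator
    box.send_keys(term)
    box.submit()

@then("results are shown")
def step_results(context):
    assert "results" in context.driver.page_source.lower()
    context.driver.quit()
```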
Krishna Chaitanya Chaganti, Associate Director at S&P Global, USA
In the present fast-paced software development environment, smoothly integrating security into DevOps procedures is essential both for assuring robust, secure products and for reducing risk. Integrating security into DevOps using SAST and DAST provides a practical guide to DevSecOps, covering security holistically and pragmatically across the software development lifecycle. Emphasizing the basic concepts of DevSecOps, this guide aims to build a culture in which security is a shared responsibility across operations, developers, and security teams. It highlights proactive vulnerability discovery, both at the source-code level and at runtime, as a basic component. Static and dynamic application security testing (SAST and DAST respectively) take center stage. The review addresses efficient methods to bring SAST into continuous integration systems to disclose vulnerabilities before they are exploited, and to use DAST tools in dynamic environments to replicate real-world attack situations. It examines best practices in tool choice, automation, distributing security testing across large development teams, and lowering false positives to keep pace with development without sacrificing security. Emphasizing common issues and practical solutions, case studies and examples show how companies have rapidly embraced DevSecOps techniques. The guide covers the design of scalable frameworks fit for technological advancement, the encouragement of cooperation between engineering and security teams, and the shift from a reactive to a proactive attitude. Also examined closely are reporting policies, regulatory compliance issues, and techniques for using metrics to continuously improve security posture. This guide gives developers, security experts, and IT managers the tools and knowledge required to integrate SAST and DAST effectively into DevOps processes, enabling a strong software delivery environment; readers will come away fully aware of how to apply DevSecOps concepts and increase their company's capacity to generate secure, high-quality software quickly. Successful DevSecOps adoption depends not only on technological solutions but also on fundamental human factors: fostering trust across teams, permitting honest communication, and building a culture that sees security as an accelerator of innovation rather than a barrier. The guide underlines the need for leadership endorsement and the growth of a security-centric culture that does not limit creativity or adaptability. It analyzes pragmatic approaches for organizing security seminars, setting achievable goals, and endorsing secure coding standards to facilitate a smoother transition. Because every company is different, the guide provides flexible models fit for different team sizes, industries, and technology platforms. Whether you are beginning your DevOps journey or enhancing an existing pipeline, it serves as a companion, helping you manage the evolving security environment confidently and accurately. Combining people, systems, and processes improves your ability to generate secure software while keeping speed and motivating creativity.
DevSecOps, DevOps, SAST, DAST, CI/CD, Application Security, Software Development Lifecycle, Security Automation, Vulnerability Scanning, Secure Code.
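As a hedged sketch of one way to wire a SAST gate into CI (using bandit, a real Python SAST tool; the source directory and the HIGH-severity-only policy are assumptions, and the guide itself may recommend different tools):

```python
# ci_sast_gate.py -- fail the build when bandit reports high-severity issues.
import json
import subprocess
import sys

def run_sast_gate(src_dir: str = "src") -> int:
    """Run bandit over src_dir, parse its JSON report, and return a
    non-zero exit code if any HIGH-severity finding is present."""
    proc = subprocess.run(
        ["bandit", "-r", src_dir, "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    report = json.loads(proc.stdout or "{}")
    high = [r for r in report.get("results", [])
            if r.get("issue_severity") == "HIGH"]
    for issue in high:
        print(f"{issue['filename']}:{issue['line_number']}: "
              f"{issue['issue_text']}")
    return 1 if high else 0

if __name__ == "__main__":
    sys.exit(run_sast_gate())
```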