Abstract: In this paper, we propose a novel classification method that utilizes syntax trees and perplexity to identify jailbreak attacks that use hostile suffixes to make large language models (LLMs) ...
As large language models (LLMs) continue to improve at writing code, a key challenge has emerged: enabling them to generate complex, high-quality training data that actually reflects real-world ...
Abstract: Autoencoder models of source code are an emerging alternative to autoregressive large language models with important benefits for genetic improvement of software. We hypothesize that encoder ...
What is a Merkle tree? A Merkle tree – also called a hash tree – is a logical structure that organizes data in a distributed system while ensuring its integrity and consistency. This ...
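The integrity property described in this snippet can be sketched with a minimal hash tree: leaves are hashed, adjacent hashes are concatenated and re-hashed until one root remains, and any change to a block changes the root. This is an illustrative sketch, not code from the snippet's source; the function names and the odd-node duplication rule (as used in Bitcoin) are assumptions.

```python
import hashlib

def _h(data: bytes) -> bytes:
    # SHA-256 used here for illustration; real systems vary.
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute the Merkle root of a list of data blocks.

    Each leaf is hashed, then adjacent pairs of hashes are
    concatenated and re-hashed level by level until a single
    root remains. An odd node is duplicated (one common
    convention; other schemes promote it unchanged).
    """
    if not leaves:
        raise ValueError("empty leaf list")
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-a", b"block-b", b"block-c"]
root = merkle_root(blocks)
# Changing any single block changes the root, which is how a
# distributed system detects tampering or divergent replicas.
assert root != merkle_root([b"block-a", b"block-X", b"block-c"])
```

Two replicas can compare just their roots to confirm they hold identical data, instead of exchanging every block.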
Large language models (LLMs) have revolutionized code generation, but their autoregressive nature poses a significant challenge. These models generate code token by token, without access to the ...
1 Faculty of English Language and Culture, Guangdong University of Foreign Studies, Guangzhou, China. 2 School of Foreign Studies, Xiangnan University, Chenzhou, China. 3 School of Foreign Studies, ...
Romera-Paredes and colleagues’ work is the latest step in a long line of research that attempts to create programs automatically by taking inspiration from biological evolution, a field called genetic ...
LLMs have had a significant impact in the fields of code generation and comprehension. These models, trained on extensive code datasets such as GitHub, excel in tasks like text-to-code conversion, ...
One of the big questions we raise in comparative psychology is about the main difference between humans and animals. There is an easy short answer to what sets us apart from the rest of the animals: ...