Implied weighting

http://dbpedia.org/resource/Implied_weighting

Implied weighting describes a group of methods used in phylogenetic analysis to assign the greatest importance to characters that are most likely to be homologous. These are a posteriori methods, which also include dynamic weighting, as opposed to a priori methods, which include adaptive, independent, and chemical categories (see Weighting at the American Museum of Natural History's website).

The first attempt to implement such a technique was by Farris (1969), who called it successive approximations weighting: a tree was constructed with equal weights, and characters that appeared as homoplasies on this tree were down-weighted based on the CI (consistency index) or RCI (rescaled consistency index), which are measures of homology. The analysis was repeated with these new weights, and the characters were re-weighted again; iteration continued until a stable state was reached. Farris suggested that each character could be considered independently with respect to a weight implied by its frequency of change. However, the final tree depended strongly on the starting weights and the stopping criteria.

The most widely used and implemented method, called implied weighting, follows from Goloboff (1993). The first time a character changes state on a tree, that state change is given the weight '1'; subsequent changes are less 'expensive' and are given smaller weights as the character's tendency for homoplasy becomes more apparent. Trees that maximize the concave function of homoplasy resolve character conflict in favour of the characters with more homology (less homoplasy) and imply that the average weight for the characters is as high as possible. Goloboff recognizes that trees with the heaviest average weights give the most 'respect' to the data: a low average weight implies that most characters are being 'ignored' by the tree-building algorithms.
Though originally proposed with a severe concavity of k = 3, Goloboff now prefers gentler concavities (e.g. k = 12), which have been shown to be more effective in simulated and real-world cases.
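The scheme described above can be sketched numerically. This is a minimal illustration, not the implementation used in phylogenetics software: it assumes Goloboff's commonly cited concave fit function f = k/(k + h), where h is a character's number of extra (homoplastic) steps on a tree and k is the concavity constant.

```python
def character_fit(extra_steps: int, k: float = 3.0) -> float:
    """Concave fit for one character: 1.0 when there is no homoplasy,
    shrinking toward 0 as extra steps accumulate on the tree."""
    return k / (k + extra_steps)

def tree_score(extra_steps_per_character, k: float = 3.0) -> float:
    """A tree's total fit is the sum over characters; preferred trees
    maximize this sum, favouring less-homoplastic characters."""
    return sum(character_fit(h, k) for h in extra_steps_per_character)

# Each additional step is cheaper than the last: the marginal cost of
# going from h to h + 1 extra steps decreases, which is what makes the
# weighting 'implied' by a character's apparent tendency for homoplasy.
marginal_costs = [character_fit(h) - character_fit(h + 1) for h in range(4)]

# A severe concavity (k = 3) punishes homoplasy harder than a gentle
# one (k = 12): the same character keeps more weight under k = 12.
fit_severe = character_fit(2, k=3.0)    # 3 / (3 + 2) = 0.6
fit_gentle = character_fit(2, k=12.0)   # 12 / (12 + 2) ≈ 0.857
```

Under this sketch, a character with no homoplasy always has fit 1.0 regardless of k, so the choice of concavity only affects how quickly conflicting characters are discounted.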
